SafeRag SE Support

Get help with your private AI assistant for macOS from the Mac App Store

Submit a Support Request

Found in SafeRag SE menu → About SafeRag SE

Or email us directly at support@corixa.io
We typically respond within 24 hours on business days

System Requirements

macOS Version

macOS 15.0 (Sequoia) or later

Processor

Apple Silicon (M1/M2/M3/M4) required
Intel Macs not supported

Memory

8GB RAM minimum
16GB+ recommended for larger models

Storage

5GB+ free space
(AI models can be 2-20GB each)

Dependencies

None — built-in AI engine via llama.cpp

Network

Required only for initial model downloads

Apple Foundation Models

macOS 26+ required (Tahoe)
Optional, works alongside llama.cpp

Getting Started

1. Install from the Mac App Store

Search for "SafeRag SE" on the Mac App Store, or follow the link from our product page. Click Get/Install to download and install the app.

2. Download Your First AI Model

On first launch, SafeRag SE will guide you through downloading an AI model. You can also use Apple Foundation Models on macOS 26+ without any downloads.

3. Start Chatting

Once a model is ready, you can start a conversation right away. Upload documents to enable RAG mode for context-aware answers from your own files.

Frequently Asked Questions

SafeRag SE won't start — what should I check?

First confirm your Mac meets the system requirements: macOS 15.0 (Sequoia) or later on Apple Silicon (Intel Macs are not supported), with at least 8GB of RAM. If the requirements are met, quit and relaunch the app; if it still won't start, restart your Mac or reinstall SafeRag SE from the Mac App Store.

How do I upload documents for RAG?

To use RAG (Retrieval-Augmented Generation) with your own documents:

  1. Open the Documents section in the sidebar
  2. Click the upload button or drag and drop files
  3. Supported formats include PDF, DOCX, TXT, and Markdown
  4. Wait for processing (the app creates vector embeddings using sqlite-vec)
  5. Enable RAG mode in chat to query your documents
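
Under the hood, the app stores vector embeddings with sqlite-vec and retrieves the document chunks most similar to your question. Purely as an illustration of that retrieval idea, here is a toy cosine-similarity search in Python; the tiny hand-made vectors stand in for real embeddings and are not what the app actually produces:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, chunks, k=2):
    # chunks: list of (text, embedding) pairs, as a vector store would hold them.
    scored = [(cosine_similarity(query_vec, emb), text) for text, emb in chunks]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

# Toy 3-dimensional "embeddings" for demonstration only.
chunks = [
    ("Invoices are due within 30 days.", [0.9, 0.1, 0.0]),
    ("The cafeteria opens at 8 am.",     [0.0, 0.2, 0.9]),
    ("Late invoices accrue interest.",   [0.8, 0.3, 0.1]),
]
query = [1.0, 0.2, 0.0]  # pretend embedding of "When are invoices due?"
print(top_k(query, chunks, k=2))
```

The two invoice-related chunks score highest, so they would be handed to the model as context; this is why more specific questions tend to retrieve better passages.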

AI responses are slow — how can I speed them up?

Generation speed depends mostly on model size and available memory. Try switching to a smaller model, close memory-hungry apps while chatting, and note that 16GB+ of RAM is recommended for larger models. On macOS 26+, Apple Foundation Models are an alternative engine that requires no downloads.

Where is my data stored?

All SafeRag SE data, including your chat history, uploaded documents, and downloaded AI models, is stored locally within the app sandbox.

No data ever leaves your Mac. Everything runs locally.

How do I enable encryption?

SafeRag SE uses AES-GCM encryption to protect your chat history and documents:

  1. Go to Settings
  2. Enable the encryption toggle
  3. Set your encryption passphrase
  4. Messages are encrypted at rest in the local database

When encryption is enabled, your data is protected even if someone gains access to your Mac's filesystem.
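
An encryption passphrase like the one you set in step 3 is typically stretched into a fixed-size key before it can be used with AES-GCM. The app's exact parameters are not documented here; this hedged Python sketch shows one common approach (PBKDF2-HMAC-SHA256 from the standard library, with an illustrative salt and iteration count):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2-HMAC-SHA256 stretches a passphrase into a 32-byte (256-bit) key,
    # the size AES-256-GCM expects. The iteration count here is an
    # illustrative choice, not SafeRag SE's actual setting.
    return hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode("utf-8"), salt, iterations, dklen=32
    )

salt = os.urandom(16)  # stored alongside the ciphertext; it is not secret
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32
```

The salt ensures that two users with the same passphrase get different keys, and the high iteration count slows down brute-force guessing.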

What is the difference between llama.cpp and Apple Foundation Models?

llama.cpp (Built-in AI Engine):

  - Ships with the app; no separate installation or dependencies
  - Runs models you download yourself (typically 2-20GB each)
  - Available on any supported Mac running macOS 15.0 or later

Apple Foundation Models:

  - Requires macOS 26 (Tahoe) or later
  - Uses Apple's built-in system models, so no downloads are needed
  - Optional, and works alongside the llama.cpp engine

You can use both engines side by side and switch between them per conversation.

How do I connect Ollama (optional)?

SafeRag SE works without Ollama, but you can optionally connect to a running Ollama instance for additional model support:

  1. Install Ollama separately from ollama.com
  2. Ensure Ollama is running on your Mac
  3. In SafeRag SE settings, enable the Ollama provider
  4. Ollama models will appear alongside your local llama.cpp models

This is entirely optional — SafeRag SE's built-in AI engine handles everything on its own.
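
Before enabling the Ollama provider in step 3, you can confirm that a local Ollama instance is actually reachable. Ollama serves an HTTP API on port 11434 by default, and its `/api/tags` endpoint lists installed models. This Python sketch (standard library only, assuming the default port) simply checks whether the server responds:

```python
import json
import urllib.error
import urllib.request

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    # /api/tags lists locally installed Ollama models; any valid response
    # means the server is up. A connection error means it is not running.
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            models = json.load(resp).get("models", [])
            print(f"Ollama is running with {len(models)} model(s) installed")
            return True
    except (urllib.error.URLError, OSError):
        print("Ollama does not appear to be running")
        return False

ollama_is_running()
```

If this reports that Ollama is not running, launch the Ollama app (or run `ollama serve`) and try again before enabling the provider.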

Troubleshooting

Model Download Failed

Symptoms: Download stops, progress bar freezes, or error message appears

Solutions:

  1. Check your internet connection
  2. Ensure you have enough free disk space (models can be 2-20GB)
  3. Cancel and retry the download
  4. Try a smaller model if you have limited storage
  5. If the issue persists, restart SafeRag SE and try again
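
For step 2, you can check free disk space from the Finder (or "About This Mac"), or with a quick script. This Python sketch uses the standard library's `shutil.disk_usage`; the 20GB threshold is just the worst-case model size mentioned above:

```python
import shutil

def free_gb(path: str = "/") -> float:
    # shutil.disk_usage reports total/used/free bytes for the
    # filesystem containing `path`.
    return shutil.disk_usage(path).free / 1e9

needed_gb = 20  # worst-case model size from the range above
available = free_gb()
print(f"{available:.1f} GB free; "
      f"{'enough' if available >= needed_gb else 'not enough'} "
      f"for a {needed_gb} GB model")
```

If the result is tight, free up space or pick a smaller model before retrying the download.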

RAG Not Finding My Documents

Symptoms: RAG mode is enabled but the AI does not reference your documents

Solutions:

  1. Verify documents have finished processing (look for a completion indicator)
  2. Ensure RAG mode is enabled in the chat input area
  3. Try asking more specific questions related to your document content
  4. Re-upload the document if processing appears to have failed
  5. Check that the document format is supported (PDF, DOCX, TXT, MD)


Still Need Help?

Can't find what you're looking for?

Contact Support

Use the support form above or email us at support@corixa.io

We typically respond within 24 hours on business days
