SafeRag SE Support
Get help with SafeRag SE, your private AI assistant for macOS, available on the Mac App Store
Submit a Support Request
Or email us directly at support@corixa.io
We typically respond within 24 hours on business days
System Requirements
- Operating system: macOS 15.0 (Sequoia) or later
- Processor: Apple Silicon (M1/M2/M3/M4) required; Intel Macs are not supported
- Memory: 8GB RAM minimum; 16GB+ recommended for larger models
- Storage: 5GB+ free space (AI models can be 2-20GB each)
- External dependencies: None — built-in AI engine via llama.cpp
- Internet: Required only for initial model downloads
- Apple Foundation Models: Optional; requires macOS 26+ (Tahoe) and works alongside llama.cpp
Getting Started
1. Install from the Mac App Store
Search for "SafeRag SE" on the Mac App Store, or follow the link from our product page. Click Get/Install to download and install the app.
2. Download Your First AI Model
On first launch, SafeRag SE will guide you through downloading an AI model. You can also use Apple Foundation Models on macOS 26+ without any downloads.
- Go to Settings and browse available models
- Choose a model that fits your hardware (smaller models for 8GB RAM, larger for 16GB+)
- Wait for the download to complete (models range from 2-20GB)
- On macOS 26+, Apple Foundation Models are available immediately with no download
3. Start Chatting
Once a model is ready, you can start a conversation right away. Upload documents to enable RAG mode for context-aware answers from your own files.
Frequently Asked Questions
SafeRag SE won't start — what should I check?
- Verify you are running macOS 15.0 (Sequoia) or later (Apple menu → About This Mac)
- Confirm you have an Apple Silicon Mac (M1, M2, M3, or M4)
- Ensure at least 5GB free disk space
- Try deleting and reinstalling from the Mac App Store
- Check Console.app for error messages mentioning SafeRag SE
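The first two checks in the list above can also be scripted. This is an illustrative sketch using only the Python standard library (it is not part of SafeRag SE); `platform.mac_ver()` reports the macOS version and `platform.machine()` reports `arm64` on Apple Silicon:

```python
import platform

def startup_checklist():
    """Report the version/architecture facts SafeRag SE's requirements call for.
    Values are only meaningful when run on macOS."""
    return {
        "macos_version": platform.mac_ver()[0],       # e.g. "15.0"; empty string off macOS
        "apple_silicon": platform.machine() == "arm64",
    }

print(startup_checklist())
```

If `apple_silicon` comes back `False` on a Mac, the app cannot run on that machine; if `macos_version` is below 15.0, update macOS first.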
How do I upload documents for RAG?
To use RAG (Retrieval-Augmented Generation) with your own documents:
- Open the Documents section in the sidebar
- Click the upload button or drag and drop files
- Supported formats include PDF, DOCX, TXT, and Markdown
- Wait for processing (the app creates vector embeddings using sqlite-vec)
- Enable RAG mode in chat to query your documents
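Under the hood, RAG retrieval ranks document chunks by vector similarity to your question. SafeRag SE does this with real embedding vectors stored in sqlite-vec, but the core idea can be shown in a self-contained toy sketch; the character-frequency "embedding" below is purely illustrative and stands in for a real embedding model:

```python
import math

def embed(text):
    # Toy embedding: 26-dim character-frequency vector (real apps use an embedding model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank chunks by similarity to the query; the top-k become the AI's context.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]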
AI responses are slow — how can I speed them up?
- Use smaller models: Choose compact models (3B-8B parameters) for faster responses
- Try Apple FM: On macOS 26+, Apple Foundation Models are optimized for your hardware
- Upgrade RAM: 16GB+ helps significantly with larger models
- Close other apps: Free up system memory and GPU resources
- Reduce context length: Shorter conversations process faster
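As a rough rule of thumb behind the "use smaller models" advice: a quantized model's weight footprint is approximately parameter count x bits per weight / 8. This hypothetical helper (not part of the app) estimates weights only; runtime overhead and context add more on top:

```python
def approx_model_ram_gb(params_billions, bits_per_weight=4):
    """Rough weight-memory estimate for a quantized model, in GiB.

    bits_per_weight=4 approximates common 4-bit quantization;
    actual usage depends on the quantization scheme and runtime.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30
```

By this estimate an 8B-parameter model at 4-bit quantization needs on the order of 4GB for weights alone, which is why 8GB Macs are better served by 3B-8B models.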
Where is my data stored?
All SafeRag SE data is stored locally within the app sandbox:
- Location: ~/Library/Containers/io.corixa.SafeRagSE/
- Database: GRDB.swift (SQLite-based)
- Vector store: sqlite-vec for document embeddings
- Encryption: AES-GCM when enabled
No data ever leaves your Mac. Everything runs locally.
How do I enable encryption?
SafeRag SE uses AES-GCM encryption to protect your chat history and documents:
- Go to Settings
- Enable the encryption toggle
- Set your encryption passphrase
- Messages are encrypted at rest in the local database
When encryption is enabled, your data is protected even if someone gains access to your Mac's filesystem.
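SafeRag SE's exact key-derivation scheme isn't documented here, but passphrase-based encryption generally works by stretching the passphrase into a fixed-size key with a KDF rather than using it directly. A generic standard-library sketch (parameters are illustrative assumptions, not the app's actual settings):

```python
import hashlib

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte (256-bit) key from a passphrase with PBKDF2-HMAC-SHA256.

    The salt is stored alongside the ciphertext; the iteration count
    makes brute-forcing the passphrase expensive.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)
```

The derived key is what an AES-GCM cipher actually consumes; the same passphrase and salt always yield the same key, while a different passphrase yields a completely different one.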
What is the difference between llama.cpp and Apple Foundation Models?
llama.cpp (Built-in AI Engine):
- Available on macOS 15.0+ (Sequoia)
- Supports a wide range of open-source models (Llama, Mistral, Gemma, etc.)
- Models must be downloaded (2-20GB each)
- Full control over model selection and parameters
Apple Foundation Models:
- Requires macOS 26+ (Tahoe)
- No model download needed — built into macOS
- Optimized by Apple for Apple Silicon hardware
- Seamless integration with the operating system
You can use both engines side by side and switch between them per conversation.
How do I connect Ollama (optional)?
SafeRag SE works without Ollama, but you can optionally connect to a running Ollama instance for additional model support:
- Install Ollama separately from ollama.com
- Ensure Ollama is running on your Mac
- In SafeRag SE settings, enable the Ollama provider
- Ollama models will appear alongside your local llama.cpp models
This is entirely optional — SafeRag SE's built-in AI engine handles everything on its own.
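To confirm Ollama is actually reachable before enabling the provider, you can query its `/api/tags` endpoint, which lists installed models. A minimal standard-library sketch, assuming Ollama's default port 11434:

```python
import json
import urllib.request
import urllib.error

def list_ollama_models(base_url="http://localhost:11434", timeout=2):
    """Return installed model names from a local Ollama server, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None
```

If this returns None, Ollama isn't running (or is on a non-default port), which is also why the provider would show no models inside SafeRag SE.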
Troubleshooting
Model Download Failed
Symptoms: Download stops, progress bar freezes, or error message appears
Solutions:
- Check your internet connection
- Ensure you have enough free disk space (models can be 2-20GB)
- Cancel and retry the download
- Try a smaller model if you have limited storage
- If the issue persists, restart SafeRag SE and try again
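The free-disk-space check above can be done without digging through System Settings. An illustrative standard-library sketch using `shutil.disk_usage`:

```python
import shutil

def free_gb(path="/"):
    """Free disk space at the given path, in GiB."""
    return shutil.disk_usage(path).free / 2**30

def enough_space_for_model(model_gb, path="/", headroom_gb=1.0):
    """A model download needs its own size plus some headroom."""
    return free_gb(path) >= model_gb + headroom_gb

print(f"Free space: {free_gb():.1f} GiB")
```

If a 20GB model is failing, verify the result is comfortably above 21 before retrying the download.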
RAG Not Finding My Documents
Symptoms: RAG mode is enabled but the AI does not reference your documents
Solutions:
- Verify documents have finished processing (look for a completion indicator)
- Ensure RAG mode is enabled in the chat input area
- Try asking more specific questions related to your document content
- Re-upload the document if processing appears to have failed
- Check that the document format is supported (PDF, DOCX, TXT, MD)
Known Issues
- First model download: Initial AI model download can take 10+ minutes depending on your internet speed and model size
- Large documents: Very large PDFs (50MB+) may take longer to process for vector embeddings
- Apple FM availability: Apple Foundation Models require macOS 26 (Tahoe) and may not be available on all devices
Best Practices
- Keep macOS and SafeRag SE updated via the Mac App Store
- Use appropriate model sizes for your hardware (with 8GB RAM, prefer smaller models)
- Enable encryption for sensitive conversations
- Try Apple Foundation Models on macOS 26+ for a zero-download experience
- Regularly review and delete old sessions to save disk space
- Enable FileVault for additional system-level encryption
Still Need Help?
Can't find what you're looking for?
Contact Support
Use the support form above or email us at support@corixa.io
We typically respond within 24 hours on business days