Set Up Your Local AI
Scribely works with a local language model for private, free AI assistance. Pick a provider, download a model, and you're ready to go.
Pick a local AI provider
Both options are free and run entirely on your Mac. Your data never leaves your machine. Pick whichever you prefer — they both work great with Scribely.
Ollama
Command-line tool that makes running local models dead simple. Download, run one command, done.
Download Ollama
How to get started
Install the app
Download from ollama.com, open the .dmg, and drag Ollama to Applications. Launch it once — it runs in the menu bar.
Pull a model
Open Terminal and run: ollama pull qwen3:1.7b — the model downloads automatically (~1 GB).
That's it
Ollama runs a local server on port 11434. Scribely detects it automatically — no configuration needed.
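If you want to confirm everything is in place, you can list your installed models and ping the server directly. These commands assume Ollama's defaults (the qwen3:1.7b model from the previous step and port 11434):
ollama list
curl http://localhost:11434/api/tags
The first prints the models you've pulled; the second returns the same list as JSON from the local server Scribely talks to.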
LM Studio
Beautiful desktop app with a visual interface for browsing, downloading, and running local models.
Download LM Studio
How to get started
Install the app
Download from lmstudio.ai, open the .dmg, and drag LM Studio to Applications.
Download a model
Open LM Studio, go to the Discover tab (magnifying glass icon), search for "Qwen3 1.7B Instruct", and click Download.
Start the server
Go to the Developer tab (</> icon), and click "Start Server". Scribely connects automatically.
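To verify the server is listening, you can query its OpenAI-compatible models endpoint. This assumes LM Studio's default port, 1234:
curl http://localhost:1234/v1/models
You should see a JSON list that includes the model you downloaded.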
Choose a model
For real-time meeting assistance, you want a small, fast model. Anything under 3B parameters runs smoothly alongside your meeting without hogging resources. Models in the 1–2B range offer the best speed-to-quality ratio.
Qwen 3 1.7B
Best balance of speed and quality at this size. Excellent for meeting Q&A and summarization.
Ollama
ollama pull qwen3:1.7b
LM Studio
Search for Qwen3-1.7B-Instruct in the Discover tab
Gemma 3 1B
Google's compact model. Fastest option, great if you want minimal resource usage.
Ollama
ollama pull gemma3:1b
LM Studio
Search for gemma-3-1b-it in the Discover tab
Llama 3.2 3B
Slightly larger but noticeably smarter. Good if your Mac has 16 GB+ RAM.
Ollama
ollama pull llama3.2:3b
LM Studio
Search for Llama-3.2-3B-Instruct in the Discover tab
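Not sure which to pick? One rough way to gauge speed on your own Mac is Ollama's verbose mode, which prints generation stats after each response. This sketch assumes you've already pulled qwen3:1.7b:
ollama run --verbose qwen3:1.7b "Summarize in one sentence: local models keep your data on your machine."
Check the eval rate (tokens per second) in the output; anything comfortably above your reading speed is fast enough for real-time assistance.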
Connect to Scribely
Open Scribely and go to Settings → LLM
Select "Ollama" or "LM Studio" as your provider. Scribely auto-detects the local server.
Pick your downloaded model
The model you pulled or downloaded will appear in the model dropdown. Select it.
You're all set
Start a meeting and ask questions — everything runs locally, privately, and for free.
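If the model doesn't appear or responses stall, you can take Scribely out of the loop and query the model directly. This example assumes Ollama with the qwen3:1.7b model from earlier; LM Studio users can send an equivalent request to port 1234 instead:
curl http://localhost:11434/api/generate -d '{"model": "qwen3:1.7b", "prompt": "Say hello in five words.", "stream": false}'
A JSON response here means the model side is healthy, so any remaining issue is in Scribely's settings.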
Don't have Scribely yet?
Download Scribely