Ollama Models
RecurseChat supports using models from Ollama, a local model server that makes it easy to run open source models on your machine.
Prerequisites
- Install Ollama on your machine
- Start the Ollama server
- Pull the models you want to use with `ollama pull <model-name>`
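If you want to confirm the server is up before opening RecurseChat, you can query Ollama's `/api/tags` endpoint, which lists the models you have pulled. A minimal sketch, assuming a stock Ollama install on the default address:

```python
# Sanity check: confirm the Ollama server is reachable and list pulled models.
# Assumes the default Ollama address (127.0.0.1:11434); adjust if yours differs.
import json
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
    print("Ollama is running. Pulled models:")
    for m in models:
        print(" -", m["name"])
except OSError as err:
    print("Could not reach Ollama:", err)
```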
Importing Ollama Models (Recommended)
To import all Ollama models:
- Open RecurseChat settings
- Make sure Ollama is running
- Go to the Models tab
- Click “Import Ollama Models”
The imported models will be automatically configured and ready to use.
Adding a single Ollama model via an OpenAI-compatible endpoint
Alternatively, you can add a single Ollama model via Ollama's OpenAI-compatible endpoint.
To add a single Ollama model:
- Go to the Models tab in RecurseChat settings
- Click the “New Model” button in the top right
- Select “New OpenAI Chat Completion model”
- Configure the model:
  - Set the base URL to `http://127.0.0.1:11434/v1`
  - Set the model ID to your desired Ollama model name (e.g., `mistral`, `llama2`, etc.)
  - Give your model a name
- Click “Save” to add the model
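To verify this same configuration outside of RecurseChat, you can point the official `openai` Python package at Ollama. A rough sketch, assuming `pip install openai` and that `mistral` is a model you have already pulled:

```python
# Sketch: talk to Ollama through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:11434/v1",  # same base URL as in RecurseChat
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="mistral",  # must match the model ID you configured
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If the script returns a reply, the same base URL and model ID will work in RecurseChat.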
Supported Models
RecurseChat supports all chat models available through Ollama, including:
- Llama 3.3
- DeepSeek-R1
- Qwen 3
- Qwen 2.5‑VL
- Gemma 3
- Mistral
- CodeLlama
- Starling
- And many more
Vision Models
RecurseChat supports Ollama vision models like LLaVA and BakLLaVA. To use them:
- Pull the vision model: `ollama pull llava` or `ollama pull bakllava`
- Add it as a model in RecurseChat settings
- You can now paste images directly into the chat
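Under the hood, images reach the model as base64 data. If you want to exercise a vision model directly against Ollama's OpenAI-compatible endpoint, a sketch along these lines should work; `photo.jpg` is a placeholder for any local image:

```python
# Sketch: send an image to a vision model via Ollama's OpenAI-compatible endpoint.
# Assumes `pip install openai`, `ollama pull llava`, and a local file photo.jpg.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="llava",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```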
Troubleshooting
If you encounter issues:
- Make sure Ollama is running
- Verify the model name matches exactly what you pulled with `ollama pull`
- Check that the model is available by running `ollama list` (a scripted version of these checks is sketched after this list)
- Try restarting both Ollama and RecurseChat
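The first three checks can be scripted. A minimal sketch, where `MODEL` is a placeholder for whatever model ID you configured in RecurseChat:

```python
# Sketch: check that Ollama is reachable and that a given model ID is pulled.
# MODEL is a placeholder; set it to the model ID you used in RecurseChat.
import json
import urllib.request

MODEL = "mistral"

try:
    with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as resp:
        names = [m["name"] for m in json.load(resp).get("models", [])]
except OSError as err:
    raise SystemExit(f"Ollama is not reachable: {err}")

# Ollama names models as "name:tag", e.g. "mistral:latest".
if any(n == MODEL or n.split(":")[0] == MODEL for n in names):
    print(f"'{MODEL}' is available.")
else:
    print(f"'{MODEL}' not found. Pulled models: {names}")
```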
For more help, visit the Ollama documentation.