
Ollama Models

RecurseChat supports using models from Ollama, a local model server that makes it easy to run open source models on your machine.

Prerequisites

  1. Install Ollama on your machine
  2. Start the Ollama server
  3. Pull the models you want to use with ollama pull <model-name>
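
If you want to confirm the server is reachable before opening RecurseChat, a quick check from Python works. This is a minimal sketch using only the standard library, assuming Ollama is listening on its default port 11434:

    import urllib.request

    # Ollama's root endpoint replies with a short status string
    # ("Ollama is running") when the server is up.
    with urllib.request.urlopen("http://127.0.0.1:11434/") as resp:
        print(resp.read().decode("utf-8"))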

Importing All Ollama Models

To import all Ollama models:

  1. Open RecurseChat settings
  2. Make sure Ollama is running
  3. Go to the Models tab
  4. Click “Import Ollama Models”

The imported models will be automatically configured and ready to use.

Adding a Single Ollama Model via the OpenAI-Compatible Endpoint

Alternatively, you can add a single Ollama model through Ollama's OpenAI-compatible endpoint.

To add a single Ollama model:

  1. Go to the Models tab in RecurseChat settings
  2. Click the “New Model” button in the top right
  3. Select “New OpenAI Chat Completion model”
  4. Configure the model:
    • Set the base URL to http://127.0.0.1:11434/v1
    • Set the model ID to the name of an Ollama model you have pulled (e.g., mistral or llama2)
    • Give your model a name
  5. Click “Save” to add the model
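
RecurseChat talks to this endpoint the same way any OpenAI client does, so you can sanity-check the base URL and model ID outside the app. Below is a minimal sketch using the openai Python package; the API key is a placeholder because Ollama ignores it, and mistral is assumed to be a model you have already pulled:

    from openai import OpenAI

    # Point the client at Ollama's OpenAI-compatible endpoint.
    client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")

    response = client.chat.completions.create(
        model="mistral",  # must match a model pulled with `ollama pull`
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)

If this prints a reply, the same base URL and model ID will work in RecurseChat.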

Supported Models

RecurseChat supports all chat models available through Ollama, including:

  • Llama 3.3
  • DeepSeek-R1
  • Qwen 3
  • Qwen 2.5‑VL
  • Gemma 3
  • Mistral
  • CodeLlama
  • Starling
  • And many more

Vision Models

RecurseChat supports Ollama vision models like LLaVA and BakLLaVA. To use them:

  1. Pull the vision model: ollama pull llava or ollama pull bakllava
  2. Add it as a model in RecurseChat settings
  3. You can now paste images directly into the chat
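
Under the hood, these models accept images through the same OpenAI-compatible endpoint, encoded as base64 data URLs. The sketch below sends a local image to llava via the openai Python package; photo.png is a hypothetical file name, and the API key is again just a placeholder:

    import base64
    from openai import OpenAI

    client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")

    # Encode the image as a base64 data URL (photo.png is hypothetical).
    with open("photo.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="llava",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)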

Troubleshooting

If you encounter issues:

  1. Make sure Ollama is running
  2. Verify the model name matches exactly what you pulled with ollama pull
  3. Check that the model is available by running ollama list
  4. Try restarting both Ollama and RecurseChat
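
Steps 2 and 3 can also be checked programmatically: Ollama's /api/tags endpoint returns the models available locally, so a short script can confirm the exact name. A minimal sketch using only the standard library, with mistral as an assumed model name:

    import json
    import urllib.request

    # /api/tags lists the models available to the local Ollama server.
    with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
        names = [m["name"] for m in json.load(resp)["models"]]

    print(names)  # e.g. ["mistral:latest", "llava:latest"]
    print("mistral:latest" in names)  # the model ID must match one of these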

For more help, visit the Ollama documentation.