
Ollama: Error: model 'X' not found

Error: model 'X' not found, try pulling it first
By Fredoline Eruo · Last verified May 6, 2026

Cause

You ran ollama run <name> for a model that hasn't been pulled into your local Ollama library yet. Older Ollama releases don't auto-pull; run only works on already-downloaded models.

A second cause: the model name is slightly off. Ollama is case-insensitive but tag-strict: a bare name resolves to the implicit :latest tag, so llama3.1 means llama3.1:latest, which is a different entry from llama3.1:8b. If you pulled one tag and run another, you get this error.
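The tag resolution described above can be sketched as a tiny shell function. The :latest default mirrors Ollama's documented behavior; the function itself is illustrative, not part of the Ollama CLI:

```shell
#!/bin/sh
# Illustrative sketch: how a bare model name resolves to a full name:tag.
# A name with no :tag gets the implicit :latest tag, so "llama3.1" and
# "llama3.1:8b" end up as two different entries in the local library.
resolve_ref() {
  case "$1" in
    *:*) printf '%s\n' "$1" ;;          # already has an explicit tag
    *)   printf '%s:latest\n' "$1" ;;   # bare name -> implicit :latest
  esac
}

resolve_ref llama3.1       # -> llama3.1:latest
resolve_ref llama3.1:8b    # -> llama3.1:8b
```

If you pulled llama3.1:8b, only the llama3.1:8b entry exists locally, so a bare llama3.1 (really llama3.1:latest) won't match it.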

Solution

Pull then run:

ollama pull llama3.1:8b
ollama run llama3.1:8b

Or, on newer Ollama versions, use ollama run directly; it pulls the model automatically if it's missing:

ollama run llama3.1:8b

Check which models are already downloaded locally:

ollama list
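ollama list prints a table, and the exact tag lives in the NAME column. Here is a sketch of extracting just that column so you can compare tags character-for-character; the sample output below is hard-coded for illustration and your table will differ:

```shell
#!/bin/sh
# Sketch: extract the NAME column from `ollama list`-style tabular output.
# The sample is hard-coded for illustration; against a real install you
# would pipe the live command instead: ollama list | awk 'NR>1 {print $1}'
sample='NAME            ID              SIZE    MODIFIED
llama3.1:8b     42182419e950    4.7 GB  2 days ago
qwen2.5:7b      845dbda0ea48    4.7 GB  5 days ago'

printf '%s\n' "$sample" | awk 'NR>1 {print $1}'
# -> llama3.1:8b
#    qwen2.5:7b
```

The awk filter skips the header row (NR>1) and prints the first whitespace-separated field of each remaining line.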

Browse the official Ollama library at ollama.com/library for the exact tag. Common gotcha: the implicit :latest tag (what you get with no :tag suffix) points to a different size per model. For llama3.1 it is the 8b build, but for qwen2.5 it is 7b.
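Putting the two failure modes together, here is a self-contained sketch of why ollama run llama3.1 fails when only llama3.1:8b was pulled. The installed_tags variable stands in for the NAME column of ollama list; the lookup logic is illustrative, not Ollama's actual code:

```shell
#!/bin/sh
# Sketch: why `ollama run llama3.1` can fail when only llama3.1:8b exists.
# installed_tags stands in for the NAME column of `ollama list` output.
installed_tags='llama3.1:8b
qwen2.5:7b'

have_model() {
  ref=$1
  case "$ref" in *:*) ;; *) ref="$ref:latest" ;; esac  # bare name -> :latest
  # -q: quiet, -x: whole-line match, -F: fixed string (no regex)
  printf '%s\n' "$installed_tags" | grep -qxF "$ref"
}

have_model llama3.1:8b && echo found || echo "not found, try pulling it first"
have_model llama3.1    && echo found || echo "not found, try pulling it first"
# -> found
#    not found, try pulling it first
```

The second lookup fails because the bare name expands to llama3.1:latest, which matches no installed entry, exactly the mismatch this error reports.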

Did this fix it?

If your case was different, email hello@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.