Command R 35B
Cohere's mid-tier — RAG and tool use. Non-commercial license.
The practical-VRAM Cohere model. Command R 35B fits on a 24 GB card at Q4 with no offload, retains the RAG specialization, and is the right pick for non-commercial RAG workflows on consumer hardware.
Strengths
- 22 GB at Q4_K_M — fits on 24 GB cards with full GPU offload.
- Same RAG training as Command R+ — citation-aware, retrieval-friendly.
- Strong tool-use format.

Weaknesses
- CC-BY-NC license — non-commercial only without a separate Cohere agreement.
- General quality below Qwen 3 32B at similar VRAM.
- Multilingual support is strong but narrower than Command R+'s.
Performance by quantization
- Q4_K_M (22 GB): 55–70 tok/s decode — full GPU, no offload
- Q5_K_M (26 GB): 18–26 tok/s — partial offload
- Q8_0 (38 GB): workstation territory
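The fit-or-offload question above comes down to simple arithmetic: model file size plus runtime headroom versus card VRAM. A minimal sketch, using the file sizes from this review and an *assumed* ~2 GB of headroom for KV cache and runtime buffers (actual overhead varies with context length):

```python
# File sizes (GB) for each quantization, taken from the list above.
QUANT_SIZES_GB = {"Q4_K_M": 22, "Q5_K_M": 26, "Q8_0": 38}
OVERHEAD_GB = 2  # assumed headroom for KV cache / runtime buffers

def fits_fully(quant: str, vram_gb: int) -> bool:
    """True if the quant plus assumed overhead fits entirely in VRAM."""
    return QUANT_SIZES_GB[quant] + OVERHEAD_GB <= vram_gb

for quant in QUANT_SIZES_GB:
    print(quant, "fits on a 24 GB card:", fits_fully(quant, 24))
```

Under these assumptions only Q4_K_M stays fully on a 24 GB card, which matches the decode numbers above — the Q5_K_M figure already reflects partial CPU offload.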
Should you run it?
Yes, for RAG workflows in non-commercial settings on a 24 GB card. No, for commercial use without the Cohere license, or for general chat where Qwen 3 32B is a better generalist.
How it compares
- vs Command R+ 104B → 104B is meaningfully smarter; 35B fits the VRAM budget.
- vs Qwen 3 32B → Qwen wins on general use + license; Command R wins on RAG specifically.
- vs Mistral Small 3 24B → Mistral has cleaner license; Command R has stronger RAG behavior.
```
ollama pull command-r:35b-q4_K_M
ollama run command-r:35b-q4_K_M
```
Settings: Q4_K_M GGUF, 16384 ctx, full GPU on RTX 4090
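Once the model is running under Ollama, you can query it programmatically through Ollama's local REST API (`POST /api/generate` on the default port 11434). A minimal sketch, assuming `ollama serve` is running and the `command-r:35b-q4_K_M` tag has been pulled:

```python
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {
        "model": "command-r:35b-q4_K_M",
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }

def generate(prompt: str) -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("Summarize the retrieved passages and cite them."))
```

Setting `"stream": False` trades incremental output for a single, easy-to-parse JSON response — fine for RAG pipelines, less so for interactive chat.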
Why this rating
7.5/10 — Command R+ at a more practical VRAM cost. 35B fits at Q4 in ~22 GB — full GPU on 24 GB cards. Same RAG specialization, license still CC-BY-NC. Loses points on absolute capability vs the larger sibling.
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 21.0 GB | 26 GB |
Get the model
Ollama
One-line install
```
ollama run command-r:35b
```

HuggingFace
Original weights
Source repository — you will need to quantize the weights yourself for local use.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Command R 35B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Command R 35B?
Can I use Command R 35B commercially?
What's the context length of Command R 35B?
How do I install Command R 35B with Ollama?
Source: huggingface.co/CohereForAI/c4ai-command-r-v01
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.