Qwen 3 8B
Qwen 3 at the 8B scale. Direct head-to-head against Llama 3.1 8B on most benchmarks; usually wins on coding and structured output.
Qwen 3 8B introduces a hybrid "thinking" / "non-thinking" toggle to the 7–8B class. In non-thinking mode it performs on a tier with Qwen 2.5 7B; in thinking mode it produces visible chain-of-thought and lifts hard-task performance closer to 14B-class models, at the cost of latency.
Strengths
- Hybrid reasoning toggle — /think and /no_think per turn let you pay for reasoning only when needed.
- Improved tool use over Qwen 2.5 — the function-call format is more standardized.
- Strong multilingual carryover from the 2.5 generation.
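The per-turn toggle is a soft switch inside the prompt itself. A minimal sketch of the idea (the helper name is ours; the /think and /no_think soft-switch convention is Qwen's):

```python
def with_reasoning(prompt: str, think: bool) -> str:
    """Append Qwen 3's soft switch to a user turn.

    /think requests visible chain-of-thought; /no_think suppresses it.
    The most recent switch in the conversation takes effect.
    """
    switch = "/think" if think else "/no_think"
    return f"{prompt} {switch}"

# Pay the reasoning tax only on the hard prompt:
easy = with_reasoning("Convert 3 km to miles.", think=False)
hard = with_reasoning("Prove there are infinitely many primes.", think=True)
```

The point is that no model reload or config change is needed: reasoning is switched per message, so a single session can mix fast lookups with slow, deliberate answers.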
Weaknesses
- Thinking-mode output is verbose — tokens-per-answer roughly doubles, which roughly halves effective speed.
- Some prompt-injection vectors specific to the /think toggle haven't been fully audited.
- Ecosystem and tooling maturity still trail Llama 3.1 8B.
- Q4_K_M (5.0 GB): 95–115 tok/s decode (non-thinking); 90–110 tok/s thinking but 2× output
- Q5_K_M (5.9 GB): 85–100 tok/s
- Q8_0 (8.4 GB): 65–82 tok/s
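Decode speed barely drops in thinking mode, but the doubled output length is what costs you. A back-of-envelope calculation using midpoints of the Q4_K_M numbers above (the 400-token answer length is an assumption for illustration):

```python
# Rough wall-clock time per answer at Q4_K_M.
answer_tokens = 400        # assumed typical final-answer length

non_thinking_tps = 105     # midpoint of 95–115 tok/s
thinking_tps = 100         # midpoint of 90–110 tok/s

t_fast = answer_tokens / non_thinking_tps
# Thinking mode emits roughly 2x the tokens (chain-of-thought + answer):
t_think = (answer_tokens * 2) / thinking_tps

print(f"non-thinking: {t_fast:.1f}s, thinking: {t_think:.1f}s")
# → non-thinking: 3.8s, thinking: 8.0s
```

So the practical penalty for /think is about 2× wall-clock time per answer, not a slower decode rate.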
Yes, for users who want the best 8B-class capability and are willing to use thinking mode selectively for hard prompts. No, for users who don't need reasoning — Qwen 2.5 7B is simpler and similar speed.
How it compares
- vs Qwen 2.5 7B → Qwen 3 8B with thinking mode wins on reasoning; without thinking, near-equal. Pick Qwen 3 if reasoning matters.
- vs Llama 3.1 8B → Qwen 3 8B wins on raw capability; Llama wins on instruction polish + ecosystem maturity.
- vs QwQ 32B → QwQ is the dedicated reasoning specialist at 32B; Qwen 3 8B's thinking mode is a poor man's QwQ at a fraction of the VRAM.
- vs Phi-4 14B → Phi-4 has cleaner reasoning at higher VRAM; Qwen 3 8B fits in less memory.
ollama pull qwen3:8b
ollama run qwen3:8b
# Toggle reasoning per turn:
# /think — enable chain-of-thought
# /no_think — disable
Settings: Q4_K_M GGUF, 8192 ctx, llama.cpp/CUDA, RTX 4090
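The same toggle works programmatically. A hedged sketch that only builds the request body for Ollama's /api/chat endpoint (sending it requires a running Ollama server, so we stop at the payload; the helper name is ours):

```python
import json

def chat_payload(prompt: str, think: bool) -> str:
    """Build a JSON body for POST http://localhost:11434/api/chat.

    The soft switch is appended to the user turn, mirroring the
    interactive /think and /no_think commands.
    """
    switch = "/think" if think else "/no_think"
    body = {
        "model": "qwen3:8b",
        "messages": [{"role": "user", "content": f"{prompt} {switch}"}],
        "stream": False,
    }
    return json.dumps(body)

payload = chat_payload("Summarize this changelog.", think=False)
```

Pipe the payload to curl or an HTTP client of your choice; the per-turn switch means one endpoint serves both fast and deliberate requests.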
Why this rating
8.5/10 — Qwen 3's hybrid reasoning mode in an 8B body. Strong as a 7B-class chat model, with a "thinking" mode that pushes it materially beyond Qwen 2.5 7B on reasoning tasks. Loses points only on ecosystem maturity vs Llama 3.1 8B.
Strengths
- Best 8B coder
- Apache 2.0
- Thinking mode
Weaknesses
- More verbose with thinking enabled
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 4.8 GB | 6 GB |
| Q8_0 | 8.2 GB | 10 GB |
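The VRAM column is roughly the weights file plus KV cache and runtime overhead. A rough estimator; the overhead figure is our assumption for ~8K context at this scale, not a measured value:

```python
def vram_estimate_gb(file_size_gb: float, overhead_gb: float = 1.2) -> float:
    """Very rough VRAM need: weights file + KV cache / runtime overhead.

    overhead_gb of ~1–1.5 GB is an assumption for 8K context on an
    8B model; longer contexts grow the KV cache and need more.
    """
    return round(file_size_gb + overhead_gb, 1)

print(vram_estimate_gb(4.8))  # Q4_K_M → 6.0 GB, matching the table
print(vram_estimate_gb(8.2))  # Q8_0   → 9.4 GB; the table rounds up to 10
```

If an estimate lands within a few hundred MB of your card's total VRAM, step down one quantization tier rather than risk spilling to system RAM.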
Get the model
Ollama
One-line install
ollama run qwen3:8b
Read our Ollama review →
HuggingFace
Original weights
Source repository with the original full-precision weights — you quantize them yourself for local use.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Qwen 3 8B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Qwen 3 8B?
About 6 GB, which covers the Q4_K_M quantization; Q8_0 wants around 10 GB.
Can I use Qwen 3 8B commercially?
Yes. The model is released under Apache 2.0, which permits commercial use.
What's the context length of Qwen 3 8B?
32K tokens natively, extendable to roughly 128K with YaRN rope scaling.
How do I install Qwen 3 8B with Ollama?
Run ollama pull qwen3:8b, then ollama run qwen3:8b to start chatting.
Source: huggingface.co/Qwen/Qwen3-8B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.