Qwen 3 235B-A22B
Overview
Qwen 3's flagship mixture-of-experts (MoE) model: 235B total parameters, with 22B active per token. Built-in "thinking" and "non-thinking" modes let you trade inference speed for reasoning depth at request time. One of the strongest open-weight reasoning models for many tasks.
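As a minimal sketch of how the mode switch looks in practice: the Qwen 3 release documents per-turn soft-switch tags (`/think` and `/no_think`) appended to a user message. The helper below is illustrative, not part of any official SDK; serving stacks may instead expose a dedicated flag for this.

```python
# Sketch of Qwen 3's per-turn thinking soft switch. The "/think" and
# "/no_think" tags follow the Qwen 3 release notes; this helper and the
# example prompts are hypothetical, for illustration only.

def with_mode(prompt: str, think: bool) -> str:
    """Append the per-turn soft-switch tag to a user message."""
    tag = "/think" if think else "/no_think"
    return f"{prompt} {tag}"

# Fast, shallow answer:
print(with_mode("Summarize this diff in one sentence.", think=False))
# Deep, step-by-step reasoning:
print(with_mode("Is 2^61 - 1 prime?", think=True))
```

The last tag in a conversation wins, so you can flip modes turn by turn without restarting the session.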
Strengths
- Switchable thinking mode
- Apache 2.0
- Top-tier reasoning
Weaknesses
- Needs 160GB+ VRAM at Q4
- Requires a multi-GPU setup on consumer hardware; no single consumer card can hold it
Quantization variants
Each quantization trades some model quality for a smaller file size and VRAM footprint. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 142.0 GB | 160 GB |
| Q5_K_M | 167.0 GB | 190 GB |
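The VRAM figures above can be roughly reproduced from first principles. The sketch below assumes all 235B weights must be resident in VRAM (MoE experts included, even though only 22B are active per token) plus ~12% overhead for KV cache and runtime buffers; the bits-per-weight values are typical averages for these quantization types, not exact.

```python
# Back-of-the-envelope VRAM estimate from parameter count and quantization
# bit-width. Assumptions (not official numbers): all weights resident in
# VRAM, ~12% overhead for KV cache, activations, and runtime buffers.

def est_vram_gb(params_b: float, bits_per_weight: float,
                overhead: float = 0.12) -> float:
    """Estimate VRAM in GB for params_b billion parameters."""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ≈ 1 GB
    return weights_gb * (1 + overhead)

print(f"Q4_K_M: ~{est_vram_gb(235, 4.85):.0f} GB")
print(f"Q5_K_M: ~{est_vram_gb(235, 5.69):.0f} GB")
```

With these assumed bit-widths, the Q4_K_M estimate lands right around the table's 160 GB figure, which is why a 4-bit quant of this model is out of reach for any single consumer GPU.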
Get the model
Ollama
One-line install
```shell
ollama run qwen3:235b
```

Read our Ollama review →

HuggingFace
Original weights
Source repository with the full-precision weights — no pre-quantized files, so you'll need to quantize them yourself before running locally.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Qwen 3 235B-A22B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Qwen 3 235B-A22B?
Roughly 160 GB for the Q4_K_M quantization — the smallest variant listed above — typically spread across multiple GPUs.
Can I use Qwen 3 235B-A22B commercially?
Yes. The model is released under the Apache 2.0 license, which permits commercial use.
What's the context length of Qwen 3 235B-A22B?
The Qwen 3 release documents a 32K-token native context window, extendable with YaRN; confirm the exact figure in the source repository.
How do I install Qwen 3 235B-A22B with Ollama?
Run `ollama run qwen3:235b` — Ollama downloads the default quantization and starts an interactive session.
Source: huggingface.co/Qwen/Qwen3-235B-A22B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.