Qwen 3.5 235B-A17B
Overview
Qwen 3.5's frontier MoE: 235B total parameters, 17B active per token. Closes the gap with DeepSeek V4 on coding and math while remaining permissively licensed under Apache 2.0.
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- GPQA Diamond leader among open models
- Hybrid thinking-mode toggle (think / no_think per turn; see the sketch after this list)
- Strongest multilingual coverage among 2026 open-weight releases
- 17B active params keep tok/s competitive with dense 30B
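The per-turn thinking toggle listed above is usually driven from the chat template. A minimal sketch, assuming Qwen 3.5 keeps Qwen3's `enable_thinking` flag and `/no_think` soft switch (neither is confirmed on this page) and using the repo id from the source link at the bottom:

```python
from transformers import AutoTokenizer

# Assumption: the repo id matches the source link on this page,
# and the chat template exposes Qwen3-style thinking controls.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-235B-A17B")

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]

# Per-turn hard toggle via the template flag (assumed to carry over from Qwen3).
prompt_think = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
prompt_no_think = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)

# Soft switch inside the message itself, Qwen3-style.
messages_soft = [{"role": "user", "content": "Summarise MoE routing in one line. /no_think"}]
prompt_soft = tok.apply_chat_template(
    messages_soft, tokenize=False, add_generation_prompt=True
)
```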
Weaknesses
- 256 GB of VRAM or unified memory needed even at the smallest listed quantization (Q4_K_M)
- 235B total ⇒ workstation territory at Q4 (226 GB)
- Geopolitical refusal posture remains a concern for some deployments
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 226.0 GB | 256 GB |
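To put the table in practice, here is a minimal sketch of loading the Q4_K_M build with llama-cpp-python. The file name is hypothetical, and at 226 GB the weights have to be spread across multiple GPUs or spill into system RAM, so treat this as the shape of the call rather than a turnkey recipe:

```python
from llama_cpp import Llama

# Hypothetical local file name for a Q4_K_M GGUF of this model.
llm = Llama(
    model_path="Qwen3.5-235B-A17B-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload as many layers as fit; lower this if VRAM runs out
    n_ctx=8192,       # keep the KV cache modest on top of the 226 GB of weights
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about mixture-of-experts."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```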
Get the model
HuggingFace
Original weights
Source repository with the original weights; quantize them yourself to produce GGUF builds like the Q4_K_M listed above.
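Fetching the original weights for local quantization can be scripted. A minimal sketch with huggingface_hub, using the repo id from the source link at the bottom of this page; the GGUF conversion and quantization step is typically done afterwards with llama.cpp's convert and quantize tools:

```python
from huggingface_hub import snapshot_download

# Repo id taken from the source link at the bottom of this page.
# Expect hundreds of GB of weight shards, so point local_dir at a large disk.
local_dir = snapshot_download(
    repo_id="Qwen/Qwen3.5-235B-A17B",
    local_dir="./Qwen3.5-235B-A17B",  # hypothetical target directory
)
print(local_dir)
```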
Hardware that runs this
Cards with enough VRAM for at least one quantization of Qwen 3.5 235B-A17B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Qwen 3.5 235B-A17B?
256 GB of VRAM (or unified memory) for the Q4_K_M quantization, the only variant listed above.
Can I use Qwen 3.5 235B-A17B commercially?
Yes. The weights are released under Apache 2.0, which permits commercial use.
What's the context length of Qwen 3.5 235B-A17B?
Source: huggingface.co/Qwen/Qwen3.5-235B-A17B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.