Qwen 3.5 235B-A17B

Qwen 3.5 frontier MoE: 235B total parameters, 17B active. Ships under the Qwen License (commercial use OK up to 100M MAU). Closes the gap with DeepSeek V4 on coding and math while remaining broadly usable in commercial deployments.

License: Qwen License (commercial OK ≤ 100M MAU) · Released May 1, 2026 · Context: 262,144 tokens

Overview

Qwen 3.5 235B-A17B is a mixture-of-experts model with 235B total parameters, of which 17B are active per token, keeping throughput closer to a dense ~30B model. It supports a 262,144-token context window and a per-turn thinking-mode toggle, and ships under the Qwen License, which permits commercial use up to 100M monthly active users. On coding and math benchmarks it closes the gap with DeepSeek V4.

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Parent / base model
Qwen 3 72B

Strengths

  • GPQA Diamond leader among open models
  • Hybrid thinking-mode toggle (think / no_think per turn)
  • Strongest multilingual coverage in open-weight 2026
  • 17B active params keep tok/s competitive with dense 30B
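
The per-turn thinking toggle listed above works through soft switches in the prompt. A minimal sketch, assuming the `/think` and `/no_think` per-turn tags documented for Qwen 3 carry over to 3.5 (the helper name is ours; verify against the model's actual chat template):

```python
def with_thinking(user_message: str, think: bool) -> str:
    """Append a Qwen-style soft switch controlling reasoning for this turn.

    Assumption: the /think and /no_think tags follow the Qwen 3
    convention; confirm against the Qwen 3.5 chat template before use.
    """
    tag = "/think" if think else "/no_think"
    return f"{user_message} {tag}"

# Per-turn control: reasoning on for a hard question, off for small talk.
hard = with_thinking("Prove that sqrt(2) is irrational.", think=True)
easy = with_thinking("Say hi in French.", think=False)
```

Because the switch is per turn, a single conversation can mix cheap non-reasoning replies with expensive reasoning ones.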

Weaknesses

  • Qwen license caps commercial use at 100M MAU
  • 235B total ⇒ datacenter territory at Q4 (226 GB)
  • Geopolitical refusal posture remains a concern for some deployments

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization | File size | VRAM required
Q4_K_M       | 226.0 GB  | 256 GB
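
A quick way to sanity-check whether a quantization fits your hardware: weights plus some runtime headroom for KV cache and activations. The ~10% overhead figure below is a rule of thumb we're assuming, not a spec:

```python
def fits_in_vram(file_size_gb: float, vram_gb: float, overhead: float = 0.10) -> bool:
    """Rough fit check: quantized weights plus an assumed ~10% runtime
    overhead for KV cache and activations (rule of thumb, not a spec)."""
    return file_size_gb * (1 + overhead) <= vram_gb

# Q4_K_M weights are 226.0 GB against the 256 GB the table lists.
ok = fits_in_vram(226.0, 256)  # 226 * 1.10 = 248.6 GB <= 256 GB
```

Long-context workloads push the overhead well past 10%, so treat the table's VRAM column as a floor, not a guarantee.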

Get the model

HuggingFace

Original weights

huggingface.co/Qwen/Qwen3.5-235B-A17B

Source repository with original weights; quantize directly from these yourself.

Hardware that runs this

Cards with enough VRAM for at least one quantization of Qwen 3.5 235B-A17B.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Step up
More capable — bigger memory footprint
No reviewed models in the next tier up yet.

Frequently asked

What's the minimum VRAM to run Qwen 3.5 235B-A17B?

256 GB of VRAM is enough to run Qwen 3.5 235B-A17B at the Q4_K_M quantization (file size 226.0 GB). Higher-quality quantizations need more.

Can I use Qwen 3.5 235B-A17B commercially?

Yes — Qwen 3.5 235B-A17B ships under the Qwen License (commercial OK ≤ 100M MAU), which permits commercial use. Always read the license text before deployment.

What's the context length of Qwen 3.5 235B-A17B?

Qwen 3.5 235B-A17B supports a context window of 262,144 tokens (256K).
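
Filling that window is not free: the KV cache grows linearly with tokens. A back-of-envelope sketch, using hypothetical GQA dimensions (layer count, KV heads, and head size below are illustrative assumptions, not published Qwen 3.5 235B-A17B specs):

```python
def kv_cache_bytes(n_tokens: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """KV cache size: keys + values stored for every layer and token.
    All dimensions are caller-supplied assumptions."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_tokens

# Hypothetical config: 94 layers, 4 KV heads, head_dim 128, fp16 cache.
full = kv_cache_bytes(262_144, n_layers=94, n_kv_heads=4, head_dim=128)
print(full / 2**30)  # 47.0 GiB at the full 262,144-token window
```

Under these assumptions the cache alone costs tens of GB at full context, which is why the VRAM figures above should be read as minimums.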

Source: huggingface.co/Qwen/Qwen3.5-235B-A17B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.