# Qwen 3 30B-A3B
## Overview
Mid-tier Qwen 3 mixture-of-experts (MoE) model. With 30B total parameters but only 3B active per token, it delivers roughly 70B-class quality at 7B-class inference speed on a single 24 GB card. The sweet spot of the Qwen 3 lineup for prosumer hardware.
## Strengths
- Only 3B parameters active per token, so inference runs at small-model speed
- Apache 2.0 license, which permits commercial use
- Hybrid thinking mode for step-by-step reasoning on demand
## Weaknesses
- All 30B weights must still fit in memory: roughly 18 GB at Q4, even though only 3B are active per token
- MoE expert routing can make output quality uneven across task types
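The speed advantage of the 3B-active design can be sketched with a rough compute estimate. A common approximation is that a decode step costs about 2 FLOPs per active parameter per token; the numbers below are illustrative, not benchmarks:

```python
def flops_per_token(active_params: float) -> float:
    """Rough decode cost: ~2 FLOPs per active parameter per token."""
    return 2.0 * active_params

# Dense 30B model: every weight participates in every token.
dense = flops_per_token(30e9)
# Qwen 3 30B-A3B: the router activates experts totalling ~3B params per token.
moe = flops_per_token(3e9)

print(f"Dense 30B:     {dense:.1e} FLOPs/token")
print(f"MoE 3B active: {moe:.1e} FLOPs/token")
print(f"Compute ratio: {dense / moe:.0f}x")
```

This is why the model decodes like a 7B-class dense model while keeping 30B parameters' worth of capacity; the trade-off is that all 30B weights must still be resident in memory.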
## Quantization variants
Each quantization level trades model quality against file size and VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 18.0 GB | 22 GB |
| Q8_0 | 32.0 GB | 36 GB |
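The VRAM column is roughly the weight file size plus KV cache and runtime overhead. A back-of-envelope estimator consistent with the table above; the flat 4 GB overhead is an assumption for illustration, not a measured value:

```python
def vram_needed_gb(file_size_gb: float, overhead_gb: float = 4.0) -> float:
    """Weights must be resident in VRAM, plus KV cache + framework overhead.

    overhead_gb is an assumed illustrative constant; real overhead varies
    with context length, batch size, and inference framework.
    """
    return file_size_gb + overhead_gb

for quant, size in [("Q4_K_M", 18.0), ("Q8_0", 32.0)]:
    print(f"{quant}: ~{vram_needed_gb(size):.0f} GB VRAM")
```

Longer contexts grow the KV cache, so budget extra headroom if you plan to run near the model's context limit.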
## Get the model

### Ollama

One-line install:

```shell
ollama run qwen3:30b
```

Read our Ollama review →

### HuggingFace

Source repository with the original weights; you will need to quantize them yourself.
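One common route from original weights to a local GGUF file is llama.cpp's conversion tooling. A sketch, assuming you have the llama.cpp repository checked out and built, plus enough disk space for the full-precision weights; script and binary names match current llama.cpp but may differ across versions:

```shell
# Download the original weights from Hugging Face (large download)
huggingface-cli download Qwen/Qwen3-30B-A3B --local-dir ./Qwen3-30B-A3B

# Convert the HF checkpoint to a GGUF file (script ships with llama.cpp)
python convert_hf_to_gguf.py ./Qwen3-30B-A3B --outfile qwen3-30b-a3b-f16.gguf

# Quantize down to Q4_K_M (~18 GB, per the table above)
./llama-quantize qwen3-30b-a3b-f16.gguf qwen3-30b-a3b-q4_k_m.gguf Q4_K_M
```

If you only need a ready-made quantization, the Ollama one-liner above is the faster path.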
## Hardware that runs this
Cards with enough VRAM for at least one quantization of Qwen 3 30B-A3B.
## Models worth comparing

Models in the same parameter band, plus one tier above and below, so you can decide what actually fits your hardware.
## Frequently asked

**What's the minimum VRAM to run Qwen 3 30B-A3B?**
About 22 GB for the Q4_K_M quantization (18 GB of weights plus KV cache and runtime overhead), which fits on a single 24 GB card.

**Can I use Qwen 3 30B-A3B commercially?**
Yes. The weights are released under the Apache 2.0 license, which permits commercial use, modification, and redistribution.

**What's the context length of Qwen 3 30B-A3B?**
32K tokens natively, extendable to around 128K tokens with YaRN-based context scaling.

**How do I install Qwen 3 30B-A3B with Ollama?**
Run `ollama run qwen3:30b`. Ollama downloads the default quantization and drops you into an interactive session.
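Beyond the interactive CLI, a running Ollama instance exposes a local REST API on port 11434. A minimal sketch using only the standard library; the model tag matches the install command above, and actually sending the request assumes Ollama is running locally:

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "qwen3:30b") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Explain mixture-of-experts in one sentence.")
print(req.full_url)

# To actually send it (requires a local Ollama server with the model pulled):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Setting `"stream": False` returns one complete JSON object instead of a stream of partial chunks, which is simpler for scripting.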
Source: huggingface.co/Qwen/Qwen3-30B-A3B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.