Mistral Small 3.2 24B
Iterative refresh of Mistral Small 3 24B. Same architecture; improved instruction following and tool-call reliability. Apache 2.0.
Overview
Mistral Small 3.2 24B is an iterative refresh of Mistral Small 3 24B. It keeps the same architecture while tightening instruction following and tool-call reliability, and it ships under the Apache 2.0 license.
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Apache 2.0
- Strong instruction following, in the Mistral tradition
- Multilingual, with strong coverage of European languages
Weaknesses
- No reasoning-mode toggle
Quantization variants
Each quantization trades some model quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 14.0 GB | 18 GB |
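As a rough sanity check, file size scales with parameter count times bits per weight, and VRAM needs add headroom for the KV cache and runtime buffers. The sketch below reproduces the table's Q4_K_M figures from that rule of thumb; the bits-per-weight values are approximate community figures for llama.cpp quants, not official specs.

```python
# Back-of-envelope estimate of GGUF file size and VRAM needs per quant.
# Bits-per-weight values are approximate, not official.

PARAMS_B = 24  # Mistral Small 3.2 parameter count, in billions

# Approximate effective bits per weight for common llama.cpp quants.
BITS_PER_WEIGHT = {
    "Q8_0": 8.5,
    "Q6_K": 6.6,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
    "Q3_K_M": 3.9,
}

def estimate_gb(quant: str, params_b: float = PARAMS_B) -> float:
    """File size in GB: parameters x bits-per-weight / 8 bits per byte."""
    return params_b * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    size = estimate_gb(q)
    # ~25% headroom for KV cache and runtime buffers.
    print(f"{q}: ~{size:.1f} GB file, plan for ~{size * 1.25:.0f} GB VRAM")
```

For Q4_K_M this gives roughly 14.4 GB on disk and 18 GB of VRAM, in line with the table above.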
Get the model
HuggingFace
Original weights
Source repository with the original weights; no pre-quantized files are provided, so quantize them yourself.
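A minimal download sketch using the huggingface_hub Python client, with the repo id taken from the source link at the bottom of this page. The ignore_patterns filter is an assumption about the repo's layout; inspect the file list before relying on it.

```python
# Fetch the original weights from Hugging Face. The repo id below is the
# one linked on this page; check it still resolves before downloading.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Mistral-Small-3.2-24B-Instruct",
    # Skip any large consolidated checkpoint if the repo also ships
    # sharded safetensors (pattern is a guess; inspect the repo first).
    ignore_patterns=["consolidated*"],
)
print("Weights downloaded to:", local_dir)
```

From there, llama.cpp's convert_hf_to_gguf.py script and llama-quantize tool can produce quantizations like the Q4_K_M listed above.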
Hardware that runs this
Cards with enough VRAM for at least one quantization of Mistral Small 3.2 24B.
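A quick way to check your own card against the table: query total VRAM through PyTorch's CUDA introspection. The 18 GB threshold below is the Q4_K_M figure from the quantization table; adjust it for other quants.

```python
# Does this machine have enough VRAM for the Q4_K_M quantization?
import torch

REQUIRED_GB = 18  # Q4_K_M requirement from the quantization table above

if not torch.cuda.is_available():
    print("No CUDA GPU detected; consider CPU inference or a smaller model.")
else:
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        total_gb = props.total_memory / 1024**3
        verdict = "OK" if total_gb >= REQUIRED_GB else "too small"
        print(f"GPU {i}: {props.name}, {total_gb:.1f} GB -> {verdict}")
```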
Frequently asked
What's the minimum VRAM to run Mistral Small 3.2 24B?
Around 18 GB for the Q4_K_M quantization listed above; higher-quality quantizations need more.
Can I use Mistral Small 3.2 24B commercially?
Yes. The model is released under Apache 2.0, which permits commercial use.
What's the context length of Mistral Small 3.2 24B?
128K tokens.
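Context length matters for memory as well: the KV cache grows linearly with it. The sketch below shows that scaling; the layer and head counts are illustrative placeholders, so read the real values from the model's config.json before trusting the numbers.

```python
# How KV-cache memory grows with context length. The config values here
# are illustrative assumptions, not taken from the official config.json.

N_LAYERS = 40      # assumption; check config.json
N_KV_HEADS = 8     # assumption (grouped-query attention); check config.json
HEAD_DIM = 128     # assumption; check config.json
BYTES = 2          # fp16 cache entries

def kv_cache_gb(n_ctx: int) -> float:
    # 2x for keys and values, per layer, per KV head, per position.
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * n_ctx * BYTES / 1024**3

for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens: ~{kv_cache_gb(ctx):.1f} GB KV cache")
```

Under these assumed values, a full 128K-token context alone would need tens of gigabytes of cache, which is why long-context runs often use a quantized KV cache or partial CPU offload.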
Source: huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related — keep moving
Verify that Mistral Small 3.2 24B runs on your specific hardware before you spend any money.