Mistral Medium 3 24B (dense)
Overview
Dense variant in the Mistral Medium 3.5 family, released under a research license (open weights, non-commercial use only). Trained on the same data as the MoE flagship, but packaged as a smaller dense model.
Strengths
- Mistral-quality instruction following in a dense 24B package
Weaknesses
- Research license blocks commercial use
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 14.0 GB | 18 GB |
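The table's numbers follow from parameter count times bits per weight. A rough back-of-envelope sketch (the ~4.85 bits/weight average for Q4_K_M and the ~20% VRAM overhead factor are assumptions for illustration, not figures from the model card):

```python
# Rough size/VRAM estimator for a quantized model.
# Assumptions (not from the model card): Q4_K_M averages ~4.85 bits
# per weight, and runtime VRAM adds roughly 20% on top of the file
# size for KV cache and activations at modest context lengths.

def quant_file_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """File size in GB: billions of params * bits per weight / 8 bits per byte."""
    return params_billion * bits_per_weight / 8

def vram_estimate_gb(file_size_gb: float, overhead: float = 0.2) -> float:
    """Add a flat overhead fraction for KV cache and activations."""
    return file_size_gb * (1 + overhead)

size = quant_file_size_gb(24, 4.85)  # ~14.6 GB, close to the table's 14.0 GB
vram = vram_estimate_gb(size)        # ~17.5 GB, in line with the listed 18 GB
print(f"{size:.1f} GB file, ~{vram:.1f} GB VRAM")
```

Actual VRAM use grows with context length, since the KV cache scales with the number of tokens kept in memory, so treat the flat overhead factor as a floor rather than a guarantee.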
Get the model
HuggingFace
Original weights
Source repository with the original weights — no prequantized files, so you will need to run the quantization yourself.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Mistral Medium 3 24B (dense).
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Mistral Medium 3 24B (dense)?
About 18 GB, enough for the Q4_K_M quantization (14.0 GB file).
Can I use Mistral Medium 3 24B (dense) commercially?
No. The weights are released under a research license that permits non-commercial use only.
What's the context length of Mistral Medium 3 24B (dense)?
Source: huggingface.co/mistralai/Mistral-Medium-3-24B-dense
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.