# Ministral 3B Instruct

Mistral AI's 3B-parameter edge model, designed for on-device inference with an extended 128k-token context window. Available under a research license only.
## Overview

Ministral 3B Instruct is Mistral AI's 3B-parameter edge model (the "2410" in the repository name marks its October 2024 release), built for on-device inference with an extended 128k-token context window. The weights are distributed under a research license, so commercial use is not permitted.
## Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
## Strengths

- 128k-token context window at only 3B parameters
- Small enough to deploy on edge and consumer devices
## Weaknesses

- Research license blocks commercial use
## Quantization variants

Each quantization level trades model quality for a smaller file size and lower VRAM use. Q4_K_M is the most popular starting point.

| Quantization | File size | VRAM required (approx.) |
|---|---|---|
| Q4_K_M | 1.9 GB | 4 GB |
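As a quick sanity check of a quant on your machine, here is a minimal sketch using the third-party llama-cpp-python bindings. The GGUF filename is hypothetical: since no prebuilt quantization is linked on this page, you would point it at the file you converted yourself (see the next section).

```python
# Minimal sketch: load a self-quantized Q4_K_M GGUF of Ministral 3B Instruct
# with llama-cpp-python. The model_path is hypothetical; substitute the
# GGUF file you produced yourself.
from llama_cpp import Llama

llm = Llama(
    model_path="./ministral-3b-instruct-2410-q4_k_m.gguf",  # hypothetical filename
    n_ctx=32768,      # start well below the 128k maximum; long contexts cost memory
    n_gpu_layers=-1,  # offload all layers; Q4_K_M fits in roughly 4 GB of VRAM
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is Ministral 3B?"}],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```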
## Get the model

- **HuggingFace (original weights):** [mistralai/Ministral-3B-Instruct-2410](https://huggingface.co/mistralai/Ministral-3B-Instruct-2410). This is the source repository; no prebuilt quantizations are linked here, so you must quantize the weights yourself.
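A sketch of pulling the original weights with the huggingface_hub library. The repo id comes from the source link at the bottom of this page; whether this particular repo is gated is an assumption, but Mistral research-license repos typically require accepting the license on the Hub and authenticating first.

```python
# Sketch: download the original weights, then quantize locally.
# Assumes you have accepted the repository's research license on the Hub
# and logged in (e.g. via `huggingface-cli login`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="mistralai/Ministral-3B-Instruct-2410")
print(f"Weights downloaded to: {local_dir}")

# From here, a common route is llama.cpp's conversion tooling, roughly:
#   python convert_hf_to_gguf.py <local_dir> --outfile ministral-3b-f16.gguf
#   llama-quantize ministral-3b-f16.gguf ministral-3b-q4_k_m.gguf Q4_K_M
# (Exact script names and flags depend on your llama.cpp version.)
```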
## Hardware that runs this

Any card with enough VRAM for at least one quantization of Ministral 3B Instruct qualifies; for the Q4_K_M build above, that means roughly 4 GB, as the sketch below illustrates.
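To make the check concrete, here is a small sketch that filters a hand-picked card list against the quantization table above. The card names and VRAM figures are common retail configurations added for illustration, not data from this page.

```python
# Sketch: which of these cards can hold the Q4_K_M quant (~4 GB VRAM)?
# The card list is illustrative, not exhaustive.
QUANT_VRAM_GB = {"Q4_K_M": 4.0}  # from the quantization table above

CARDS_GB = {
    "GTX 1650": 4,   # common 4 GB retail config
    "RTX 3060": 12,  # common 12 GB retail config
    "RTX 4090": 24,
}

def can_run(card_vram_gb: float, quant: str = "Q4_K_M") -> bool:
    """True if the card's VRAM meets the quant's stated requirement."""
    return card_vram_gb >= QUANT_VRAM_GB[quant]

for card, vram in CARDS_GB.items():
    print(f"{card}: {'fits' if can_run(vram) else 'too small'}")
```

Note that a card sitting exactly at the requirement (4 GB here) leaves little headroom for context and OS overhead, so treat the threshold as a floor rather than a comfortable target.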
## Frequently asked

**What's the minimum VRAM to run Ministral 3B Instruct?**
About 4 GB, enough for the Q4_K_M quantization listed above.

**Can I use Ministral 3B Instruct commercially?**
No. The weights are released under a research license that does not permit commercial use.

**What's the context length of Ministral 3B Instruct?**
128k tokens.
Source: huggingface.co/mistralai/Ministral-3B-Instruct-2410
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Verify that Ministral 3B Instruct runs on your specific hardware before spending any money.