SmolLM 2 1.7B Instruct
SmolLM 2 flagship. Open data + open weights at the edge tier.
Overview
SmolLM 2 1.7B Instruct is the flagship of Hugging Face's SmolLM 2 family: an edge-tier, instruction-tuned model released with open weights and a fully open training dataset under the Apache 2.0 license.
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tuning relationships.
Strengths
- Permissive Apache 2.0 license (commercial use allowed)
- Fully open training dataset, not just open weights
Weaknesses
- Llama 3.2 1B scores higher on some benchmarks, but it ships under the more restrictive Llama Community License rather than Apache 2.0
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 1.1 GB | 2 GB |
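The file size in the table follows from simple arithmetic: a quantized GGUF file is roughly the parameter count times the average bits per weight. A minimal sketch, assuming Q4_K_M averages about 4.85 bits per weight (an approximation; the exact figure varies with the tensor mix):

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in gigabytes: parameters x bits per weight."""
    return n_params * bits_per_weight / 8 / 1e9

# SmolLM 2 1.7B has ~1.7e9 parameters; Q4_K_M ~4.85 bits/weight (assumption).
size = gguf_size_gb(1.7e9, 4.85)
print(f"{size:.2f} GB")  # in the same ballpark as the 1.1 GB in the table
```

The extra headroom between file size and the 2 GB VRAM figure covers the KV cache and runtime buffers, which grow with context length.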
Get the model
HuggingFace
Original weights
Source repository with the original unquantized weights; you must convert and quantize them yourself to run them locally.
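Converting the original weights yourself can be sketched with llama.cpp's tooling. The repository id comes from the source link below; the local paths and output filenames are illustrative:

```shell
# Download the original safetensors weights from Hugging Face.
huggingface-cli download HuggingFaceTB/SmolLM2-1.7B-Instruct --local-dir smollm2

# Convert to GGUF, then quantize to Q4_K_M with llama.cpp's tools.
python convert_hf_to_gguf.py smollm2 --outfile smollm2-f16.gguf
./llama-quantize smollm2-f16.gguf smollm2-q4_k_m.gguf Q4_K_M
```

Pre-quantized GGUF files are often published by third parties as well, which skips the conversion step entirely.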
Hardware that runs this
Cards with enough VRAM for at least one quantization of SmolLM 2 1.7B Instruct.
Frequently asked
What's the minimum VRAM to run SmolLM 2 1.7B Instruct?
About 2 GB, using the Q4_K_M quantization listed in the table above.
Can I use SmolLM 2 1.7B Instruct commercially?
Yes. The model is released under the Apache 2.0 license, which permits commercial use.
What's the context length of SmolLM 2 1.7B Instruct?
SmolLM 2 supports a context length of 8,192 tokens.
Source: huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related
Verify that SmolLM 2 1.7B Instruct runs on your specific hardware before spending money on an upgrade.