Hermes 3 Llama 3.2 3B
Nous Research's Hermes 3 fine-tune of Llama 3.2 3B. Strong general-instruction following at the 3B tier.
Overview
Hermes 3 is Nous Research's generalist instruction-tuned series; this release fine-tunes Meta's Llama 3.2 3B base. It offers strong general instruction following at the 3B tier with a memory footprint small enough for laptops and edge devices.
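Like other Hermes releases, Hermes 3 is trained on ChatML-style conversation turns, so prompts should be wrapped in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of that formatting (the helper name is ours, not part of any library):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-formatted prompt, the turn format Hermes models are trained on."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # leave the assistant turn open for generation
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

Most runtimes (llama.cpp, Ollama, LM Studio) apply this template automatically from the model's metadata; you only need it when driving raw completions yourself.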
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Strong 3B instruction tuning
- Apple Silicon edge-friendly
Weaknesses
- 3B parameter ceiling limits depth
Quantization variants
Each quantization trades a little model quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 1.8 GB | 3 GB |
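The table's figures can be sanity-checked with back-of-the-envelope math: file size is roughly parameters times bits per weight. A short sketch, assuming approximate values (~3.2B parameters for Llama 3.2 3B, ~4.8 bits per weight for Q4_K_M):

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file-size estimate: parameters * bits per weight / 8, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed figures, not exact: ~3.2e9 params, ~4.8 bits/weight for Q4_K_M.
print(round(gguf_size_gb(3.2e9, 4.8), 2))  # in the same ballpark as the table's 1.8 GB
```

Actual VRAM use is higher than the file size because the KV cache and compute buffers grow with context length.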
Get the model
HuggingFace
Original weights
Source repository with the original safetensors weights; you will need to quantize them yourself (e.g. with llama.cpp) to produce a GGUF.
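Fetching the original weights can be scripted with the `huggingface_hub` library. A minimal sketch, assuming `huggingface_hub` is installed and the local directory name is your choice:

```python
REPO_ID = "NousResearch/Hermes-3-Llama-3.2-3B"

def fetch_weights(local_dir: str = "hermes3-3b"):
    """Download the original safetensors weights (requires network access)."""
    try:
        from huggingface_hub import snapshot_download  # pip install -U huggingface_hub
    except ImportError:
        print("huggingface_hub not installed; run: pip install -U huggingface_hub")
        return None
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)
```

From there, llama.cpp's conversion and quantization tools turn the safetensors checkpoint into the GGUF quantizations listed above.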
Hardware that runs this
Cards with enough VRAM for at least one quantization of Hermes 3 Llama 3.2 3B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Hermes 3 Llama 3.2 3B?
About 3 GB, which covers the Q4_K_M quantization listed in the table above.
Can I use Hermes 3 Llama 3.2 3B commercially?
Generally yes: it inherits the Llama 3.2 Community License, which permits commercial use subject to Meta's terms. Review the license on the source repository before shipping.
What's the context length of Hermes 3 Llama 3.2 3B?
The Llama 3.2 base supports up to a 128K-token context; confirm the effective limit in the model card for this fine-tune.
Source: huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related — keep moving
Verify Hermes 3 Llama 3.2 3B runs on your specific hardware before committing money.