Phi-4 Multimodal
Overview
Multimodal variant in Microsoft's Phi-4 family: it accepts vision + text input and generates text. Smaller than Llama 4 Scout, but it covers most image-Q&A workflows and is right-sized for 16 GB consumer cards.
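For orientation, a minimal image-Q&A call via Hugging Face transformers might look like the sketch below. The repo id and the `<|user|>`/`<|image_1|>`/`<|end|>`/`<|assistant|>` prompt tags follow Microsoft's model card for Phi-4-multimodal-instruct and should be verified there; the image URL is a placeholder.

```python
# Minimal image-Q&A sketch with transformers. Repo id and prompt tags are
# taken from the Phi-4-multimodal-instruct model card; verify before use.
# The model card also enables flash-attention-2, which may be required.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

repo = "microsoft/Phi-4-multimodal-instruct"  # assumed repo id
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, device_map="auto", torch_dtype="auto"
)

# Placeholder image; swap in a real URL or local path.
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
prompt = "<|user|><|image_1|>What is in this picture?<|end|><|assistant|>"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated reply.
reply = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(reply)
```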
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Multimodal at consumer-card scale
- MIT license (permissive; allows commercial use)
Weaknesses
- Vision quality below frontier multimodal models
Quantization variants
Each quantization trades some model quality for a smaller file size and VRAM footprint. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 9.0 GB | 12 GB |
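As a rough sanity check, compare a quant's file size plus runtime overhead against your card's VRAM. The sketch below is back-of-the-envelope arithmetic, not a measurement; the 3 GB overhead figure for KV cache, activations, and CUDA context is an assumption, and real usage varies with context length and backend.

```python
# Back-of-the-envelope check: does a given quantization fit in VRAM?
OVERHEAD_GB = 3.0  # assumed headroom for KV cache, activations, CUDA context

def fits_in_vram(file_size_gb: float, vram_gb: float,
                 overhead_gb: float = OVERHEAD_GB) -> bool:
    """True if model weights plus estimated runtime overhead fit in VRAM."""
    return file_size_gb + overhead_gb <= vram_gb

# Q4_K_M from the table above: a 9.0 GB file on a 12 GB card.
print(fits_in_vram(9.0, 12.0))  # True: 9.0 + 3.0 <= 12.0
```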
Get the model
HuggingFace
Original weights
Source repository; no pre-quantized builds are linked, so quantize the weights yourself if you need a smaller build.
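To fetch the original weights for local quantization, a huggingface_hub call like the sketch below works; the repo id mirrors the source link above and is an assumption to verify.

```python
# Download the original weights for local quantization.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="microsoft/Phi-4-multimodal-instruct",  # assumed repo id
    local_dir="phi-4-multimodal",                   # where to place the files
)
print(f"Weights downloaded to {local_dir}")
```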
Hardware that runs this
Cards with enough VRAM to run at least one quantization of Phi-4 Multimodal.
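To check whether your card clears the 12 GB floor from the quantization table, a quick PyTorch query works; this sketch assumes a single CUDA device.

```python
# Query total VRAM on the first CUDA device and compare it against the
# 12 GB minimum from the table above; assumes PyTorch with CUDA support.
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0: {total_gb:.1f} GB VRAM")
    print("Meets 12 GB minimum" if total_gb >= 12 else "Below 12 GB minimum")
else:
    print("No CUDA device detected")
```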
Models worth comparing
Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Phi-4 Multimodal?
12 GB, which is enough for the Q4_K_M quantization (9.0 GB file) listed above.
Can I use Phi-4 Multimodal commercially?
Yes. The model is released under the MIT license, which permits commercial use.
What's the context length of Phi-4 Multimodal?
The model card lists a 128K-token context window; check the source repository for the current figure.
Does Phi-4 Multimodal support images?
Yes. It accepts image and text input and generates text output.
Source: huggingface.co/microsoft/Phi-4-multimodal
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.