Llama 3.2 90B Vision
Llama 3.2 multimodal at 90B. Datacenter-tier predecessor to Llama 4 Maverick. Strong visual reasoning.
Overview
Llama 3.2 90B Vision is Meta's 90B multimodal model and the datacenter-tier predecessor to Llama 4 Maverick. It pairs a dense text backbone with a vision encoder and offers strong visual reasoning, but as a base model it generates completions rather than following chat instructions.
How to run it
Llama 3.2 90B Vision is Meta's base multimodal model: a 90B dense text backbone with a vision encoder and no instruction tuning. It shares its architecture with the instruct variant but has no chat template, so it generates completions, not responses. Run it at Q4_K_M via llama.cpp, using llama-server with the matching mmproj file for vision, or text-only with -ngl 999 -fa -c 4096. Expect a Q4_K_M file size of roughly 51 GB for the text weights plus about 3-5 GB for the vision projector. Minimum VRAM is 48 GB: an RTX A6000 handles Q3_K_M with vision or Q4_K_M text-only, while an A100 80GB is recommended for full Q4_K_M with vision. Because this is a base model, use it for fine-tuning, few-shot completion, or as a vision backbone; it is not suitable for direct chat without instruction tuning, so use the instruct variant for chat. For serving, vLLM's multimodal pipeline may work, but verify that it supports base vision models. Few-shot prompting with image+text pairs is the primary way to interact with a base model.
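To make the few-shot completion pattern concrete, here is a minimal sketch that posts a formatted prompt to llama-server's /completion endpoint. It assumes a server started as described above, running locally on the default port 8080; the Q:/A: labels, separators, and stop strings are illustrative choices, not anything the model requires.

```python
import requests

# Few-shot completion against a local llama-server instance
# (assumed to be running on the default port 8080 with the base model loaded).
SERVER = "http://localhost:8080"

# Base models continue text, so the interface is just a carefully formatted prompt.
# The Q:/A: labels and blank-line separators are arbitrary but must be consistent.
few_shot = (
    "Q: What is the capital of France?\n"
    "A: Paris\n\n"
    "Q: What is the capital of Japan?\n"
    "A: Tokyo\n\n"
    "Q: What is the capital of Peru?\n"
    "A:"
)

resp = requests.post(
    f"{SERVER}/completion",
    json={
        "prompt": few_shot,
        "n_predict": 16,         # a short continuation is enough for one answer
        "temperature": 0.2,      # low temperature keeps the completion on-pattern
        "stop": ["\n\n", "Q:"],  # stop before the model invents the next question
    },
    timeout=120,
)
print(resp.json()["content"].strip())
```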
Hardware guidance
Minimum: RTX A6000 48GB at Q4_K_M text-only, or Q3_K_M with vision. Recommended: A100 80GB at AWQ-INT4 for vision serving. Budget: dual RTX 3090 (48 GB total) at Q4_K_M text-only, or Q3_K_M with vision.

VRAM math is identical to the instruct variant: 90B text weights at Q4 take about 51 GB, the vision projector another 3-5 GB, and the KV cache at 8K context roughly 15 GB, for a total of about 69-71 GB with vision at 8K.

- A100 80GB: comfortable.
- Dual RTX 3090 (48 GB total): must reduce context or drop to a lower quant.
- Mac Studio M4 Ultra 128GB: Q4_K_M with vision at roughly 3-6 tok/s.
- RTX 5090 32GB: text-only Q4_K_M with KV cache offload.

Cloud: a single A100 at roughly $5-10/hr. The base variant may not be on Ollama; verify the Hugging Face repo for GGUF availability.
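The arithmetic above is simple enough to script. A minimal sketch using the approximate figures quoted in this section (51 GB text weights at Q4, ~3-5 GB projector, ~15 GB KV cache at 8K); the overhead constant is an assumption, and all numbers are estimates rather than measurements.

```python
# Rough VRAM fit check for Llama 3.2 90B Vision, using the estimates quoted above.
TEXT_WEIGHTS_Q4_GB = 51.0   # 90B text backbone at Q4_K_M
VISION_PROJ_GB = 4.0        # vision projector, quoted above as roughly 3-5 GB
KV_CACHE_8K_GB = 15.0       # KV cache at 8K context
OVERHEAD_GB = 1.5           # CUDA context and scratch buffers (assumption)

def estimated_footprint_gb(with_vision: bool = True, context_frac: float = 1.0) -> float:
    """Estimated total VRAM in GB for the chosen configuration."""
    total = TEXT_WEIGHTS_Q4_GB + OVERHEAD_GB + KV_CACHE_8K_GB * context_frac
    if with_vision:
        total += VISION_PROJ_GB
    return total

for vram, label in [(80.0, "A100 80GB"), (48.0, "dual RTX 3090"), (32.0, "RTX 5090")]:
    need = estimated_footprint_gb()
    verdict = "fits" if need <= vram else "reduce context, drop vision, or use a lower quant"
    print(f"{label}: need ~{need:.1f} GB with vision at 8K, have {vram:.0f} GB -> {verdict}")
```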
What breaks first
1. Base model, not chat. It generates completions in whatever style the prompt sets; feeding it a conversational prompt produces unpredictable continuations unless it is formatted as few-shot with completion markers.
2. Vision projector mismatch. Base vision GGUFs require a matching mmproj file; mixing the instruct mmproj with the base text GGUF produces garbled vision outputs.
3. Few-shot context contamination. Base models are sensitive to prompt formatting: extra whitespace or inconsistent few-shot formatting can degrade output quality dramatically (see the formatting sketch after this list).
4. Fine-tuning data format. Fine-tuning this model requires multimodal dataset formatting with text and image interleaving; standard text-only fine-tuning scripts may not handle image tokens correctly.
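Point 3 above is easy to trip over. A small sketch of the kind of formatting discipline that helps: assemble few-shot examples from structured data with one canonical separator instead of hand-pasting strings. The Input/Output labels and field names are illustrative, not a required format.

```python
# Build a few-shot prompt from structured examples so the separator, labels,
# and whitespace stay identical across every shot (illustrative format only).

examples = [
    {"question": "Summarize: The cat sat on the mat.", "answer": "A cat sat on a mat."},
    {"question": "Summarize: It rained all day in Lima.", "answer": "It rained in Lima."},
]
query = "Summarize: The server restarted twice overnight."

def build_prompt(examples, query):
    shots = []
    for ex in examples:
        # .strip() guards against stray leading/trailing whitespace in the data,
        # which base models can latch onto as part of the pattern.
        shots.append(f"Input: {ex['question'].strip()}\nOutput: {ex['answer'].strip()}")
    shots.append(f"Input: {query.strip()}\nOutput:")
    return "\n\n".join(shots)

print(build_prompt(examples, query))
```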
Runtime recommendation
Common beginner mistakes
- Mistake: Chatting with the base model and wondering why responses are nonsensical. Fix: Base models complete text, they don't follow instructions. Use a few-shot completion format or switch to the instruct variant.
- Mistake: Using the instruct mmproj with the base text GGUF. Fix: The vision projector must match the model variant. Download the matching mmproj from the same Hugging Face repo.
- Mistake: Assuming base and instruct have identical image understanding quality. Fix: Instruction tuning affects vision understanding, and the base model's raw vision representations may differ from instruct. Test your task on both.
- Mistake: Fine-tuning without a multimodal data format. Fix: The vision encoder expects image tokens in a specific format. Use a multimodal fine-tuning framework, not standard text-only scripts (a generic record layout is sketched below).
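To make that last point concrete, here is a generic sketch of an interleaved text+image training record. The field names, the <image> placeholder, and the JSONL layout are illustrative assumptions only; every multimodal fine-tuning framework defines its own required schema, so check your trainer's documentation.

```python
import json

# One JSONL record pairing an image with a text completion target.
# The schema below is purely illustrative: field names, the <image> placeholder,
# and the file layout all vary between fine-tuning frameworks.
record = {
    "image_path": "data/images/receipt_0001.png",
    "prompt": "<image>\nTranscribe the total amount shown on this receipt:",
    "completion": " $42.17",
}

with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# A text-only script would treat "<image>" as ordinary characters; a multimodal
# trainer replaces it with the image tokens produced by the vision encoder.
```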
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent and child edges record direct distillation or fine-tune relationships.
Strengths
- Frontier-tier multimodal
- Strong visual reasoning
Weaknesses
- 64GB+ VRAM tier
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| AWQ-INT4 | 52.0 GB | 64 GB |
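A quick way to use these figures is to pick the largest quant that fits your VRAM budget. The sketch below uses the AWQ-INT4 row from the table and a Q4_K_M entry derived from the VRAM math in the hardware section; both are estimates, and the Q4_K_M requirement in particular is an assumption rather than a published figure.

```python
# Pick the largest quantization that fits a VRAM budget, using the figures
# quoted on this page. Treat both rows as estimates, not measurements.
QUANTS = [
    # (name, file size in GB, approximate VRAM needed in GB)
    ("AWQ-INT4", 52.0, 64.0),
    ("Q4_K_M + mmproj", 55.0, 70.0),  # assumption: derived from the VRAM math above
]

def pick_quant(vram_gb: float):
    """Return the best-fitting quant for the given VRAM budget, or None."""
    candidates = [q for q in QUANTS if q[2] <= vram_gb]
    return max(candidates, key=lambda q: q[1]) if candidates else None

print(pick_quant(80))   # ('Q4_K_M + mmproj', 55.0, 70.0)
print(pick_quant(64))   # ('AWQ-INT4', 52.0, 64.0)
print(pick_quant(48))   # None: reduce context, drop vision, or use a lower quant
```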
Get the model
HuggingFace
Original weights
Source repository — direct quantization required.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Llama 3.2 90B Vision.
Frequently asked
What's the minimum VRAM to run Llama 3.2 90B Vision?
About 48 GB: an RTX A6000 can run Q4_K_M text-only or Q3_K_M with vision. For full Q4_K_M with the vision projector, plan on an 80 GB card such as an A100.
Can I use Llama 3.2 90B Vision commercially?
What's the context length of Llama 3.2 90B Vision?
Does Llama 3.2 90B Vision support images?
Yes. It is a multimodal model with a vision encoder; when running GGUF quants you need the matching mmproj file alongside the text weights.
Source: huggingface.co/meta-llama/Llama-3.2-90B-Vision
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related — keep moving
Verify Llama 3.2 90B Vision runs on your specific hardware before committing money.