Llama 3.2 90B Vision

llama · 90B parameters · Commercial OK · Multimodal · Reviewed May 2026

Llama 3.2 multimodal at 90B. Datacenter-tier predecessor to Llama 4 Maverick. Strong visual reasoning.

License: Llama Community License · Released Sep 25, 2024 · Context: 131,072 tokens

How to run it

Llama 3.2 90B Vision is Meta's base multimodal model: a 90B dense text backbone with a vision encoder and no instruction tuning. It has the same architecture as the instruct variant but no chat template, so it generates completions, not responses. That makes it a model for fine-tuning, few-shot completion, or use as a vision backbone; it is not suitable for direct chat. For chat, use the instruct variant. Few-shot prompting with image+text pairs is the primary base-model interaction pattern.

Run it at Q4_K_M via llama.cpp with llava-server for vision, or text-only with -ngl 999 -fa -c 4096 (launch sketch below). The Q4_K_M file is ~51 GB (text) plus ~3-5 GB (vision projector). Minimum VRAM is 48 GB: an RTX A6000 at Q3_K_M with vision, or at Q4_K_M text-only. Recommended: an A100 80GB for full Q4_K_M plus vision. For serving, vLLM's multimodal pipeline is an option if it supports base vision models — verify.
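A minimal sketch of both modes, assuming a recent llama.cpp build where the server binary is llama-server and vision is wired in via a --mmproj projector flag; the GGUF filenames are placeholders, so verify the binary name, flags, and filenames against your build and the repo you download from:

    # Text-only completion serving, with the flags quoted above
    ./llama-server -m Llama-3.2-90B-Vision.Q4_K_M.gguf -ngl 999 -fa -c 4096

    # Vision: load the matching projector alongside the text weights
    # (assumes your build accepts --mmproj; older builds used separate tools)
    ./llama-server -m Llama-3.2-90B-Vision.Q4_K_M.gguf \
        --mmproj mmproj-Llama-3.2-90B-Vision.gguf -ngl 999 -fa -c 4096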

Hardware guidance

  • Minimum: RTX A6000 48GB — Q4_K_M text-only, or Q3_K_M with vision.
  • Recommended: A100 80GB — AWQ-INT4 for vision serving.
  • Budget: dual RTX 3090 (48 GB) — Q4_K_M text-only, or Q3_K_M + vision.

VRAM math is identical to the instruct variant: 90B text at Q4 ≈ 51 GB, vision projector ~3-5 GB, KV cache at 8K context ~15 GB. Total with vision at 8K: ~69-71 GB (worked sum below).

  • A100 80GB: comfortable.
  • Dual RTX 3090 (48GB): must reduce context or quantization.
  • Mac Studio M4 Ultra 128GB: Q4_K_M + vision, 3-6 tok/s.
  • RTX 5090 32GB: text-only Q4_K_M with KV offload.
  • Cloud: a single A100 at $5-10/hr.

The base variant may not be on Ollama — verify the Hugging Face repo for GGUF availability.
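That budget as a worked sum, using the page's own figures (the projector term takes the midpoint of the quoted 3-5 GB range):

    # Q4_K_M weights + vision projector + KV cache at 8K context, in GB
    weights=51; mmproj=4; kv=15
    echo "total: $((weights + mmproj + kv)) GB"
    # prints 70 GB: comfortable on an A100 80GB, over budget on a
    # 48 GB card unless you drop the projector or step down a quant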

What breaks first

  1. Base model, not chat. It generates completions in whatever style the prompt sets; a conversational prompt produces unpredictable continuations unless it is formatted as few-shot with completion markers.
  2. Vision projector mismatch. Base vision GGUFs require a matching mmproj file. Mixing the instruct mmproj with the base text GGUF produces garbled vision outputs.
  3. Few-shot context contamination. Base models are sensitive to prompt formatting: extra whitespace or inconsistent few-shot formatting can degrade output quality dramatically. A formatting sketch follows this list.
  4. Fine-tuning data format. Fine-tuning this model requires multimodal dataset formatting (text + image interleaving). Standard text-only fine-tuning scripts may not handle image tokens correctly.
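To make items 1 and 3 concrete, here is a hypothetical few-shot completion prompt: identical field layout in every shot, a fixed ### separator, and no stray whitespace. The binary name and flags assume a recent llama.cpp build:

    ./llama-cli -m Llama-3.2-90B-Vision.Q4_K_M.gguf -ngl 999 -c 4096 -p \
    'Caption: a red bicycle leaning against a brick wall
    Category: vehicle
    ###
    Caption: a bowl of ramen with a soft-boiled egg
    Category: food
    ###
    Caption: a border collie catching a frisbee mid-air
    Category:'
    # A consistent pattern like this should continue with " animal".
    # Break it (an extra blank line, a renamed field, a missing ###)
    # and a base model's output quality drops sharply.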

Runtime recommendation

llama.cpp with llava-server for local base-model vision use; vLLM for production. For fine-tuning: Axolotl with multimodal support, or Meta's reference training code. Avoid Ollama for base vision models — it is designed around instruct/chat models.

Common beginner mistakes

  • Mistake: Chatting with the base model and wondering why responses are nonsensical. Fix: Base models complete text; they don't follow instructions. Use a few-shot completion format or switch to the instruct variant.
  • Mistake: Using the instruct mmproj with the base text GGUF. Fix: The vision projector must match the model variant. Download the matching mmproj from the same Hugging Face repo (see the sketch after this list).
  • Mistake: Assuming base and instruct have identical image-understanding quality. Fix: Instruction tuning affects vision understanding; the base model's raw vision representations may differ from instruct's. Test your task on both.
  • Mistake: Fine-tuning without a multimodal data format. Fix: The vision encoder expects image tokens in a specific format. Use a multimodal fine-tuning framework, not standard text-only scripts.
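One way to avoid the mmproj mismatch is to pull the text GGUF and its projector from the same repository in a single command. The repository name and filenames below are hypothetical placeholders; check what the repo you settle on actually publishes:

    # huggingface-cli ships with the huggingface_hub Python package
    huggingface-cli download some-org/Llama-3.2-90B-Vision-GGUF \
        Llama-3.2-90B-Vision.Q4_K_M.gguf \
        mmproj-Llama-3.2-90B-Vision.gguf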

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.

Parent / base model
  • Llama 3.2 11B Vision · 11B · Consumer

Family siblings (llama-3.2-vision)
  • Llama 3.2 11B Vision · 11B · Consumer
  • Llama 3.2 90B Vision · 90B · you are here

Strengths

  • Frontier-tier multimodal
  • Strong visual reasoning

Weaknesses

  • 64GB+ VRAM tier

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
AWQ-INT4        52.0 GB      64 GB

Get the model

HuggingFace

Original weights

huggingface.co/meta-llama/Llama-3.2-90B-Vision

Source repository — direct quantization required.

Hardware that runs this

Cards with enough VRAM for at least one quantization of Llama 3.2 90B Vision.

  • NVIDIA GB200 NVL72 · 13824 GB · nvidia
  • AMD Instinct MI355X · 288 GB · amd
  • AMD Instinct MI325X · 256 GB · amd
  • AMD Instinct MI300X · 192 GB · amd
  • NVIDIA B200 · 192 GB · nvidia
  • NVIDIA H100 NVL · 188 GB · nvidia
  • NVIDIA H200 · 141 GB · nvidia
  • Intel Gaudi 3 · 128 GB · intel

Frequently asked

What's the minimum VRAM to run Llama 3.2 90B Vision?

64GB of VRAM is enough to run Llama 3.2 90B Vision at the AWQ-INT4 quantization (file size 52.0 GB). Higher-quality quantizations need more.

Can I use Llama 3.2 90B Vision commercially?

Yes — Llama 3.2 90B Vision ships under the Llama Community License, which permits commercial use. Always read the license text before deployment.

What's the context length of Llama 3.2 90B Vision?

Llama 3.2 90B Vision supports a context window of 131,072 tokens (128K).

Does Llama 3.2 90B Vision support images?

Yes — Llama 3.2 90B Vision is multimodal and accepts text + vision inputs. Vision support requires a runner that handles its image-conditioning architecture.

Source: huggingface.co/meta-llama/Llama-3.2-90B-Vision

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier (models in the same parameter band as this one):
  • Llama 3.3 70B Instruct · llama · 70B · 9.1/10
  • DeepSeek R1 Distill Llama 70B · deepseek · 70B · 9.0/10
  • Qwen 2.5 72B Instruct · qwen · 72B · 9.0/10
  • Llama 3.1 70B Instruct · llama · 70B · 8.0/10

Step up (more capable, bigger memory footprint):
  • DeepSeek V4 Pro (1.6T MoE) · deepseek · 1600B · unrated
  • Qwen 3.5 235B-A17B (MoE) · qwen · 397B · unrated

Step down (smaller, faster, runs on weaker hardware):
  • Qwen 3 30B-A3B · qwen · 30B · unrated
  • Gemma 4 31B Dense · gemma · 31B · unrated