llama · 11B parameters · Commercial OK · Multimodal

Llama 3.2 11B Vision Instruct

First-party multimodal Llama. Accepts images alongside text for VQA, document understanding, and chart reading. Runs on 12GB+ VRAM.

License: Llama 3.2 Community License · Released Sep 25, 2024 · Context: 131,072 tokens

Overview

Llama 3.2 11B Vision Instruct is Meta's first-party multimodal Llama: an image encoder and cross-attention adapter trained on top of the Llama 3.1 8B text model. It accepts images alongside text for visual question answering, document understanding, and chart reading, and runs on 12GB+ VRAM.

Strengths

  • Strong vision-language baseline
  • Document and chart understanding

Weaknesses

  • License bars EU-domiciled individuals and companies from the multimodal models
  • Higher VRAM than text-only 8B

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization   File size   VRAM required
Q4_K_M         7.9 GB      11 GB
Q8_0           12.5 GB     16 GB
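
As a rule of thumb, the VRAM figure is roughly the file size plus runtime overhead (KV cache, vision-encoder activations, buffers). A minimal sketch of that heuristic; the 35% overhead factor is an assumption, not a measured figure for this model:

    # Rough VRAM heuristic: the model file must fit in VRAM, plus headroom
    # for the KV cache, vision-encoder activations, and runtime buffers.
    def estimate_vram_gb(file_size_gb: float, overhead: float = 0.35) -> float:
        """Approximate VRAM requirement in GB (overhead factor is assumed)."""
        return file_size_gb * (1 + overhead)

    for quant, size_gb in [("Q4_K_M", 7.9), ("Q8_0", 12.5)]:
        print(f"{quant}: ~{estimate_vram_gb(size_gb):.1f} GB VRAM")
    # Q4_K_M: ~10.7 GB VRAM  (table lists 11 GB)
    # Q8_0: ~16.9 GB VRAM    (table lists 16 GB)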

Get the model

Ollama

One-line install

    ollama run llama3.2-vision:11b

Read our Ollama review →
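
If you script against Ollama instead of using the CLI, the official ollama Python package can attach an image to a chat message. A minimal sketch; photo.jpg is a placeholder path:

    # pip install ollama
    import ollama

    response = ollama.chat(
        model="llama3.2-vision:11b",
        messages=[{
            "role": "user",
            "content": "What does this chart show?",
            "images": ["photo.jpg"],  # placeholder: any local image path
        }],
    )
    print(response["message"]["content"])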

HuggingFace

Original weights

huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct

Source repository with the original weights; quantize them yourself if your runner needs a GGUF or similar format.
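
To run the original weights directly, the standard transformers vision-language flow applies. A minimal sketch following the model card's usage pattern; it assumes access to the gated repo and roughly 22 GB+ of VRAM for bf16 weights, and chart.png is a placeholder:

    # pip install transformers torch pillow accelerate
    import torch
    from PIL import Image
    from transformers import AutoProcessor, MllamaForConditionalGeneration

    model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
    model = MllamaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("chart.png")  # placeholder: any local image
    messages = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Summarize this chart."},
        ],
    }]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(image, prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(processor.decode(output[0], skip_special_tokens=True))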

Hardware that runs this

Cards with enough VRAM for at least one quantization of Llama 3.2 11B Vision Instruct.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run Llama 3.2 11B Vision Instruct?

11GB of VRAM is enough to run Llama 3.2 11B Vision Instruct at the Q4_K_M quantization (file size 7.9 GB). Higher-quality quantizations need more.

Can I use Llama 3.2 11B Vision Instruct commercially?

Yes. Llama 3.2 11B Vision Instruct ships under the Llama 3.2 Community License, which permits commercial use, with two notable carve-outs: services exceeding 700 million monthly active users need a separate license from Meta, and EU-domiciled users are excluded from the multimodal models. Always read the license text before deployment.

What's the context length of Llama 3.2 11B Vision Instruct?

Llama 3.2 11B Vision Instruct supports a context window of 131,072 tokens (128K).
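
Keep in mind that context costs VRAM on top of the weights. A rough fp16 KV-cache estimate, assuming an 8B-class text-backbone shape (32 self-attention layers, 8 grouped-query KV heads, head dimension 128); these hyperparameters are assumptions for illustration:

    # KV cache size = 2 (K and V) * layers * kv_heads * head_dim
    #                 * context_tokens * bytes_per_value
    layers, kv_heads, head_dim = 32, 8, 128  # assumed backbone shape
    bytes_fp16 = 2

    def kv_cache_gib(context_tokens: int) -> float:
        return 2 * layers * kv_heads * head_dim * context_tokens * bytes_fp16 / 2**30

    print(f"{kv_cache_gib(8_192):.1f} GiB at 8K")      # ~1.0 GiB
    print(f"{kv_cache_gib(131_072):.1f} GiB at 128K")  # ~16.0 GiB

At the full 128K window the cache alone would dwarf a 12 GB card, which is why local runners typically default to a much shorter context.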

How do I install Llama 3.2 11B Vision Instruct with Ollama?

Run `ollama pull llama3.2-vision:11b` to download, then `ollama run llama3.2-vision:11b` to start a chat session. The default quantization is Q4_K_M.

Does Llama 3.2 11B Vision Instruct support images?

Yes. Llama 3.2 11B Vision Instruct is multimodal and accepts combined text and image inputs. Vision support requires a runner that handles its cross-attention image-conditioning architecture; Ollama supports it, but generic GGUF loaders may not.

Source: huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.