Qwen 2.5-VL 72B
Qwen 2.5 vision-language flagship at 72B. Strong on document understanding + multi-image queries. Apache 2.0.
Overview
Qwen 2.5 VL 72B is Alibaba's vision-language flagship: a 72B dense text backbone paired with the Qwen-VL vision encoder. It is strong at OCR, document understanding, and multi-image queries, and ships under Apache 2.0.
How to run it
Qwen 2.5 VL 72B is Alibaba's 72B vision-language model: a 72B dense text backbone plus the Qwen-VL vision encoder. Run it at Q4_K_M via llama.cpp's multimodal path (the text GGUF plus its mmproj vision projector), or via Ollama if a VL tag is available.

Q4_K_M file size is ~41 GB (text) plus ~4-6 GB (vision). Minimum VRAM is 48 GB: an RTX A6000 at Q3_K_M with vision, or text-only Q4_K_M. Recommended: an A100 80GB at AWQ-INT4 for full vision serving. Throughput is ~10-18 tok/s on an A6000 at Q4_K_M text-only; vision encoding adds ~2-4 s per image. Qwen-VL's vision encoder is well optimized, with lower VRAM overhead than InternVL's InternViT.

The model is strong at OCR, document understanding, and visual reasoning. 128K context is advertised, but the practical ceiling for vision work at Q4 on 80 GB is 4-8K. Qwen 2.5 VL shares its text backbone with Qwen 2.5 72B, so ecosystem support is broad. For production, use vLLM with the multimodal pipeline and tensor parallelism if needed.
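For a quick sanity check outside llama.cpp, the upstream Transformers path looks roughly like the sketch below. This is a minimal sketch, assuming transformers >= 4.49 (which adds `Qwen2_5_VLForConditionalGeneration`) and the `qwen-vl-utils` helper package; the image path and prompt are illustrative, and a full-precision 72B load needs multiple GPUs or an AWQ-INT4 checkpoint.

```python
# Minimal Transformers sketch, assuming transformers >= 4.49 and the
# qwen-vl-utils package (pip install qwen-vl-utils). A full-precision 72B
# load needs multiple GPUs; an AWQ-INT4 checkpoint fits a single 80 GB card.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

MODEL = "Qwen/Qwen2.5-VL-72B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL)

# One image plus one question; the file path is illustrative.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "invoice.png"},
        {"type": "text", "text": "Summarize the line items in this invoice."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)  # resizes/normalizes for the encoder
inputs = processor(text=[text], images=images, videos=videos,
                   padding=True, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
reply = processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                               skip_special_tokens=True)[0]
print(reply)
```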
Hardware guidance
Minimum: RTX A6000 48GB at Q3_K_M + vision (tight). Recommended: A100 80GB at AWQ-INT4.

VRAM math: 72B text at Q4_K_M ≈ 41 GB; Qwen-VL encoder 3-5 GB; KV cache at 8K ~10 GB; total with vision ~54-56 GB.

- RTX A6000 48GB: must drop to Q3_K_M (31 GB) + vision at 4K context.
- A100 80GB: comfortable for Q4 + vision + 8K.
- Dual RTX 4090 (48 GB total): Q4_K_M text-only, or Q3_K_M + vision.
- Mac Studio (M3 Ultra, 128 GB unified memory): Q4_K_M + vision at 3-6 tok/s.
- Cloud: A100 at $5-10/hr.

Qwen-VL's encoder is smaller than InternViT, so vision costs less VRAM. AWQ-INT4 drops the text weights to ~36 GB, enabling 16K+ context on 80 GB.
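To see how those numbers compose, here is a back-of-envelope calculator built from the figures above. The bits-per-weight values are rough averages for llama.cpp K-quants (Q4_K_M ≈ 4.6, Q3_K_M ≈ 3.5), and the fixed overhead term is an assumption, not a measured value.

```python
# Back-of-envelope VRAM budget using the figures quoted above. Bits-per-weight
# values are rough averages for llama.cpp K-quants, not exact numbers.
def vram_gb(params_b: float, bits_per_weight: float,
            vision_gb: float, kv_gb: float, overhead_gb: float = 2.0) -> float:
    weights_gb = params_b * bits_per_weight / 8  # 1e9 params * bytes/param ~= GB
    return weights_gb + vision_gb + kv_gb + overhead_gb

# Q4_K_M (~4.6 bpw) with vision and an 8K KV cache: ~57 GB -> 80 GB class.
print(round(vram_gb(72, 4.6, vision_gb=4, kv_gb=10), 1))
# Q3_K_M (~3.5 bpw) with vision and a 4K KV cache: ~42 GB -> tight on 48 GB.
print(round(vram_gb(72, 3.5, vision_gb=4, kv_gb=5), 1))
```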
What breaks first
1. Vision tag availability in Ollama. Qwen 2.5 VL may not have an official Ollama tag. Community tags may exist but aren't verified; test with raw llama.cpp if Ollama fails.
2. Image preprocessing mismatch. Qwen-VL expects specific image preprocessing (resolution, normalization). Feeding raw images without preprocessing degrades vision quality; use the model's image processor from HF transformers.
3. KV cache with vision. Vision tokens are prepended to the text prompt: each image adds 256-1024 tokens to context, so multiple images inflate context and KV cache proportionally. Budget for image tokens (see the sketch after this list).
4. Qwen 2.5 vs Qwen 3 VL. Qwen 2.5 VL and Qwen 3 VL use different vision encoders. Don't mix model files or chat templates between versions.
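A minimal budgeting helper for point 3. The tokens-per-image default takes the worst case of the 256-1024 range above, and the reply budget is an illustrative assumption; actual vision token counts vary with resolution.

```python
# Budget helper for point 3. The 1024 tokens-per-image default is the worst
# case of the 256-1024 range quoted above; actual counts depend on resolution.
def usable_text_tokens(context_window: int, n_images: int,
                       tokens_per_image: int = 1024,
                       reply_budget: int = 512) -> int:
    remaining = context_window - n_images * tokens_per_image - reply_budget
    if remaining <= 0:
        raise ValueError("images alone overflow the context window")
    return remaining

print(usable_text_tokens(8192, n_images=4))  # 3584 tokens left for the prompt
```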
Runtime recommendation
Local single-user: llama.cpp (multimodal build) or Ollama if a verified VL tag exists. Production: vLLM with the multimodal pipeline, AWQ-INT4 weights, and tensor parallelism as needed.
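A hedged vLLM sketch under those assumptions. The AWQ model tag, context cap, and per-request image limit are illustrative, and the raw prompt string assumes Qwen's `<|image_pad|>` placeholder convention; a vLLM build with Qwen2.5-VL support is required.

```python
# Hedged vLLM sketch; assumes a vLLM build with Qwen2.5-VL support. The AWQ
# model tag, context cap, and image limit are illustrative, not prescriptive.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-72B-Instruct-AWQ",  # AWQ-INT4 to fit one 80 GB GPU
    max_model_len=8192,                # the practical vision context noted above
    limit_mm_per_prompt={"image": 4},  # cap images per request
    tensor_parallel_size=1,            # raise for multi-GPU serving
)

prompt = ("<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
          "Extract the table from this page as CSV.<|im_end|>\n"
          "<|im_start|>assistant\n")
out = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": Image.open("page.png")}},
    SamplingParams(max_tokens=512),
)
print(out[0].outputs[0].text)
```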
Common beginner mistakes
- Mistake: Using a standard text-only GGUF and expecting vision to work. Fix: Vision requires a multimodal GGUF with the Qwen-VL encoder included; download from bartowski or convert from Hugging Face.
- Mistake: Ignoring image token count in the context budget. Fix: Each image in Qwen-VL consumes 256-1024 tokens; subtract image tokens from your available context window.
- Mistake: Using the Llama 3.2 Vision chat template for Qwen-VL. Fix: Different architectures, different templates; use Qwen's chat template from tokenizer_config.json.
- Mistake: Sending large images without preprocessing. Fix: Qwen-VL expects images within a specific resolution range; use Qwen's image processor (see the sketch after this list) or resize manually before inference.
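For the resolution mistake, Qwen's processor can clamp inputs for you. A short sketch: the pixel bounds follow Qwen-VL's documented one-vision-token-per-28x28-pixel-patch convention, but the exact limits chosen here are illustrative.

```python
# Clamp image resolution via the processor's pixel bounds (a documented
# Qwen-VL convention: one vision token per 28x28-pixel patch). The exact
# limits below are illustrative, not required values.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-72B-Instruct",
    min_pixels=256 * 28 * 28,   # floor: ~256 vision tokens per image
    max_pixels=1024 * 28 * 28,  # ceiling: ~1024 vision tokens per image
)
```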
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.
Strengths
- Frontier-tier multimodal
- Apache 2.0
- Strong document Q&A
Weaknesses
- 48GB+ VRAM tier
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q3_K_M | ~31 GB (text) | 48 GB (vision at 4K context, tight) |
| Q4_K_M | ~41 GB (text) + 4-6 GB (vision) | ~54-56 GB with vision |
| AWQ-INT4 | 42.0 GB | 48 GB |
Get the model
HuggingFace
Original weights
Source repository — direct quantization required.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Qwen 2.5-VL 72B.
Frequently asked
What's the minimum VRAM to run Qwen 2.5-VL 72B?
48 GB (RTX A6000 class) running Q3_K_M with the vision encoder, and that is tight. 80 GB is recommended for Q4 quality with vision.
Can I use Qwen 2.5-VL 72B commercially?
Yes. It is released under Apache 2.0.
What's the context length of Qwen 2.5-VL 72B?
128K tokens advertised; practical vision workloads at Q4 on 80 GB fit 4-8K.
Does Qwen 2.5-VL 72B support images?
Yes, it's a vision-language model. Budget 256-1024 context tokens per image.
Source: huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct
Verify Qwen 2.5-VL 72B runs on your specific hardware before committing money.