Vision-Language
Open-weight
Apache 2.0 (mostly)

LLaVA

by UW-Madison + Microsoft Research + community

The pioneering open-weight VLM family. LLaVA-1.5, LLaVA-NeXT (1.6), LLaVA-OneVision. Established the VLM training recipe that Qwen-VL + InternVL refined.

Best entry point for local use

Start with LLaVA 1.6 13B at Q4_K_M via Ollama — it fits on a single RTX 3060 12GB at ~9 GB VRAM (roughly 7–8 GB for the quantized text backbone + ~600 MB for the vision encoder and projector, plus KV cache). LLaVA 1.6 (also released as LLaVA-NeXT) is the reference open-weight vision-language model — the LLaVA family pioneered the CLIP-vision-encoder + projector + Llama-backbone recipe that most subsequent open VLMs adopted. The 13B variant uses Vicuna-13B as its backbone and delivers solid general VQA and image description. For higher quality, LLaVA 1.6 34B at Q4 (~22 GB) fits in an RTX 4090's 24 GB. Skip LLaVA 1.5 — the 1.6 release adds dynamic high-resolution input (AnyRes), which is essential for document/chart understanding. However, InternVL2 now outperforms LLaVA on virtually every VQA benchmark — LLaVA's remaining advantages are a simpler architecture and broader runtime support. The code is Apache 2.0; Vicuna-backed weights inherit the Llama 2 community license (hence the "mostly" in the tag above).
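If it helps to see the call shape, here is a minimal sketch using the ollama Python client (assumes `pip install ollama`, a running Ollama daemon, and that ollama pull llava:13b has already fetched the weights; the image path and prompt are illustrative):

    # Minimal local VQA call against LLaVA 1.6 13B served by Ollama.
    # Assumes: `pip install ollama`, the Ollama daemon is running, and
    # `ollama pull llava:13b` has already fetched backbone + projector.
    import ollama

    response = ollama.chat(
        model="llava:13b",
        messages=[{
            "role": "user",
            "content": "Describe this chart and read out the key numbers.",
            "images": ["./chart.png"],  # illustrative local image path
        }],
    )
    print(response["message"]["content"])

The same request shape works against Ollama's HTTP endpoint (POST /api/chat) if you'd rather not pull in the Python client.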

Deployment guidance

  • Single-user local: Ollama + llava:13b Q4_K_M on an RTX 3060 12GB. Ollama's LLaVA support is the most mature VLM deployment path — ollama run llava:13b downloads both the LLM backbone and the CLIP vision encoder with correctly configured projector weights.
  • Multi-user serving: vLLM 0.6.1+ with the LLaVA multimodal backend on an L40S 48 GB — handles ~200 concurrent VQA requests (sketched below).
  • Either way, keep the CLIP vision encoder (ViT-L/14, ~300M parameters) in GPU memory alongside the text backbone — it is small relative to the backbone, so don't offload it.
  • Document understanding pipelines: deploy LLaVA with tiled AnyRes preprocessing (max 4 tiles for 13B, 6 for 34B) — each tile is a separate vision-encoder forward pass, adding ~100 ms per tile on an RTX 4090.
  • Image-only workloads that don't need text generation: skip LLaVA and use the CLIP vision encoder directly via Transformers (also sketched below).
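For the vLLM path, a minimal offline-inference sketch of the same VQA call — the HF model ID and Vicuna-style prompt template are taken from the llava-hf model cards and are assumptions to verify against vLLM's supported-models list:

    # Offline single-image inference with vLLM's multimodal support.
    # Model ID and prompt template are illustrative; check vLLM docs.
    from vllm import LLM, SamplingParams
    from PIL import Image

    llm = LLM(model="llava-hf/llava-v1.6-vicuna-13b-hf", max_model_len=4096)

    outputs = llm.generate(
        {
            "prompt": "USER: <image>\nWhat is the total on this invoice?\nASSISTANT:",
            "multi_modal_data": {"image": Image.open("invoice.png")},
        },
        SamplingParams(temperature=0.2, max_tokens=256),
    )
    print(outputs[0].outputs[0].text)

For the actual multi-user server, vllm serve with the same model ID exposes an OpenAI-compatible endpoint whose chat API accepts image_url content parts.

And for the image-only case, a sketch of loading LLaVA's vision tower on its own through Transformers — LLaVA 1.5/1.6 use the openai/clip-vit-large-patch14-336 checkpoint, and the output here is the pooled CLIP embedding (useful for similarity/search), not the patch features LLaVA itself feeds the projector:

    # Standalone CLIP vision encoder via Transformers.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

    inputs = processor(images=Image.open("photo.jpg"), return_tensors="pt")
    embedding = model.get_image_features(**inputs)  # shape (1, 768)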

Recommended runtimes

llama.cpp

Related families

Qwen-VL · InternVL

Related — keep moving

Compare hardware
  • RTX 3090 vs RTX 4090 →
  • RTX 4090 vs RTX 5090 →
Buyer guides
  • 16 GB vs 24 GB VRAM — vision-language needs →
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Runtimes that fit
  • llama.cpp →
Alternatives
Qwen-VL · InternVL
Before you buy

Verify LLaVA runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →