VRAM (Video RAM)
VRAM is the dedicated memory on a GPU. For local AI, VRAM capacity is the single most important spec — it determines which models you can load. The relationship between model size, quantization, and VRAM is the central calculation behind every "will this run" question.
Rules of thumb: a model in FP16 needs roughly 2 GB per billion parameters; in 4-bit quantization (Q4), about 0.6 GB per billion. Add 15-30% overhead for KV cache, activation memory, and runtime buffers. A 7B model in Q4 fits comfortably in 8 GB of VRAM; a 70B in Q4 needs about 48 GB; a 70B in FP16 needs 140 GB or more.
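The rules of thumb above reduce to a one-line calculation. A minimal sketch (the function name and the 15% overhead default are illustrative choices, not a standard formula):

```python
def estimate_vram_gb(params_billion: float,
                     bytes_per_param: float,
                     overhead: float = 0.15) -> float:
    """Estimate VRAM needed to load and run a model.

    bytes_per_param: ~2.0 for FP16, ~0.6 for 4-bit (Q4) quantization
    overhead: fraction added for KV cache, activations, buffers (0.15-0.30)
    """
    return params_billion * bytes_per_param * (1 + overhead)

# 7B at Q4 with 15% overhead: ~4.8 GB -> fits in 8 GB VRAM
print(round(estimate_vram_gb(7, 0.6), 1))
# 70B at Q4: ~48 GB
print(round(estimate_vram_gb(70, 0.6), 1))
# 70B at FP16: ~160 GB (140 GB of weights plus overhead)
print(round(estimate_vram_gb(70, 2.0), 1))
```

Longer contexts grow the KV cache, so push the overhead fraction toward the high end when planning for long-context use.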
Important: running out of VRAM is a cliff, not a gradual slowdown. If a model spills into system RAM via CPU offload, generation can drop from ~40 tok/s to 2-3 tok/s, which makes interactive use impractical. Apple Silicon's unified memory sidesteps the split entirely: the GPU can address the full system RAM pool, which is why M-series Macs punch above their weight for local LLMs.