Quantization
Quantization is the process of reducing a model's numeric precision to shrink its memory footprint with minimal quality loss. A 70B-parameter model in FP16 takes about 140 GB; in Q4_K_M (a 4-bit GGUF format) it takes roughly 42 GB, small enough to fit in 48 GB of VRAM (e.g., two 24 GB RTX 4090s, or a single 48 GB card), though a single 24 GB GPU would still need to offload most layers to CPU.
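As a rough sanity check, weight storage scales linearly with bits per weight. A sketch of the arithmetic follows; the bits-per-weight figures are approximate, and real files add metadata and runtime KV-cache overhead on top:

```python
# Back-of-envelope weight-storage sizing; ignores metadata and KV cache.

def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

for fmt, bpw in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.83)]:
    print(f"70B @ {fmt:7s}: ~{model_size_gb(70e9, bpw):5.1f} GB")
# 70B @ FP16   : ~140.0 GB
# 70B @ Q8_0   : ~ 74.4 GB
# 70B @ Q4_K_M : ~ 42.3 GB
```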
Common formats for local inference: GGUF (the llama.cpp format, with K-quants like Q4_K_M, Q5_K_M, Q8_0), EXL2 (the ExLlamaV2 format, NVIDIA-focused), AWQ and GPTQ (post-training quantization methods, primarily run on NVIDIA GPUs), and MLX (Apple Silicon).
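For example, a common way to run a GGUF quant locally is through the llama-cpp-python bindings. A minimal sketch, assuming the package is installed and a model file is on disk (the filename is a placeholder):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q4_K_M.gguf",  # placeholder path to a local quant
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU; lower this if VRAM is tight
)
out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```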
Important non-obvious detail: Q4_K_M isn't uniformly 4-bit. It uses 6-bit (Q6_K) quantization for half of the attention-value and feed-forward down-projection tensors and 4-bit (Q4_K) elsewhere, averaging about 4.83 bits/parameter. This is why the naive "params × 4 bits / 8" estimate under-predicts actual file size by roughly 20%.
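The arithmetic behind that gap, as a quick sketch:

```python
naive_gb = 70e9 * 4.0 / 8 / 1e9    # flat 4 bits/weight        -> 35.0 GB
actual_gb = 70e9 * 4.83 / 8 / 1e9  # mixed 6-/4-bit average    -> ~42.3 GB
print(f"actual file is {actual_gb / naive_gb - 1:.0%} larger than the naive estimate")
# actual file is 21% larger than the naive estimate
```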