GPTQ
GPTQ (Generative Pre-trained Transformer Quantization) is a one-shot post-training quantization method that uses approximate second-order information (an inverse Hessian computed from calibration inputs) to pick quantized weights that minimize per-layer reconstruction error. It predates AWQ and was the dominant 4-bit format in 2023-2024, before AWQ overtook it for most NVIDIA serving.
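A compact restatement of that per-layer problem, in the usual GPTQ notation (W is a layer's weight matrix, X its calibration inputs, F the set of not-yet-quantized weights in a row):

```latex
% Layer-wise objective: keep the quantized layer's output close to the original
\hat{W} \;=\; \operatorname*{arg\,min}_{\hat{W}} \; \bigl\lVert W X - \hat{W} X \bigr\rVert_2^2,
\qquad H \;=\; 2 X X^{\top}

% Greedy per-weight step: quantize one weight w_q, then shift the remaining
% unquantized weights F to absorb the error, using the inverse Hessian over F
\delta_F \;=\; -\,\frac{w_q - \mathrm{quant}(w_q)}{\bigl[H_F^{-1}\bigr]_{qq}} \,\cdot\, \bigl(H_F^{-1}\bigr)_{:,q}
```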
In 2026, GPTQ is still relevant when: (1) a model only ships with GPTQ quants and no AWQ version is available, (2) you're running ExLlamaV2 (which natively uses GPTQ-derived EXL2 quants), (3) you need 3-bit quantization, more aggressive than AWQ's 4-bit floor. For new production deployments on vLLM, AWQ is generally the better choice; GPTQ is supported, but its kernels receive less optimization attention.
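A minimal sketch of case (1), serving an existing GPTQ checkpoint with vLLM's offline API. The repo name is only illustrative; any Hugging Face repo that ships GPTQ weights loads the same way, and vLLM normally auto-detects the format from the checkpoint's quantization config.

```python
# Minimal sketch: offline inference over a GPTQ checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-GPTQ",  # example GPTQ repo; substitute your own
    quantization="gptq",               # optional: force the GPTQ weight loader/kernels
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain GPTQ in one sentence."], params)
print(outputs[0].outputs[0].text)
```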
Operator-honest comparison: on quality, GPTQ tends toward ~3-5% loss vs FP16, while AWQ targets ~2%. On throughput, AWQ is typically 5-15% faster on vLLM. On model coverage, GPTQ has a longer tail of available checkpoints because it was the first widely adopted 4-bit method. For single-stream throughput on NVIDIA consumer cards, ExLlamaV2 with EXL2 quants (GPTQ lineage) often beats AWQ, but it lacks the production-serving stack vLLM provides.