EXL2
EXL2 is the quantization format used by ExLlamaV2. It is NVIDIA-only and optimized for single-stream throughput. It supports fractional bit rates (e.g. 4.65 bits per weight) by mixing higher-precision weights for "important" channels with lower precision for the rest. It is typically served through ExLlamaV2 with TabbyAPI as the OpenAI-compatible wrapper.
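The fractional bit rate falls out of simple weighted averaging. A minimal sketch (not ExLlamaV2's actual quantizer, and the tier split is an illustrative assumption) showing how mixing per-group bit widths produces an average like 4.65 bits per weight:

```python
# Hedged sketch: how mixing precision tiers yields a fractional
# average bits-per-weight. Not ExLlamaV2's real measurement pass.
def average_bpw(groups):
    """groups: list of (num_weights, bits) pairs, one per precision tier."""
    total_bits = sum(n * bits for n, bits in groups)
    total_weights = sum(n for n, _ in groups)
    return total_bits / total_weights

# Illustrative split: 16.25% of weights kept at 8-bit for sensitive
# channels, the rest quantized to 4-bit.
mix = [(1_625, 8), (8_375, 4)]
print(average_bpw(mix))  # 4.65
```

In the real format, the quantizer chooses the split per layer by measuring quantization error against calibration data, then spends the extra bits where they reduce error most.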
Operationally, EXL2 delivers the fastest single-stream tok/s on consumer NVIDIA hardware for most models, typically beating vLLM with AWQ by 10-25% on per-stream throughput at the same VRAM target. The compatibility cost: EXL2 doesn't support continuous batching the way vLLM does, so multi-tenant concurrency is weaker, and it doesn't run on AMD or Apple hardware.
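A "same VRAM target" comparison comes down to weight footprint at a given bits-per-weight. A hedged back-of-envelope (weights only; it ignores KV cache, activations, and runtime overhead, and the 70B example is illustrative):

```python
# Approximate weight VRAM (decimal GB) for a model quantized to a
# given bits-per-weight. Weights only -- real usage adds KV cache,
# activations, and framework overhead.
def weight_vram_gb(params_billion, bpw):
    bytes_total = params_billion * 1e9 * bpw / 8
    return bytes_total / 1e9

# A 70B model at 4.65 bpw vs. unquantized FP16 (16 bpw):
print(round(weight_vram_gb(70, 4.65), 1))  # 40.7
print(round(weight_vram_gb(70, 16), 1))    # 140.0
```

This is why fractional bit rates matter: they let you tune the quant to land just under a fixed VRAM budget (e.g. 48 GB across two 24 GB cards) rather than jumping between whole-bit sizes.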
When to use EXL2: solo-user or small-team deployments on consumer NVIDIA hardware where peak per-stream tok/s matters more than concurrency. The dual-3090 NVLink + ExLlamaV2 combo is one of the highest-throughput single-stream setups under $2,000. When NOT to use EXL2: production serving with multi-user concurrency (use vLLM/SGLang with AWQ instead), AMD or Apple targets, or workloads that need the broader quant ecosystem (GGUF for portability).
Reviewed by Fredoline Eruo.