Tight fit

Running Qwen 2.5 Coder 32B Instruct on NVIDIA GeForce RTX 4090

Qwen 2.5 Coder 32B Instruct fits at Q4_K_M, but headroom is tight (0 GB). Reduce context or use a smaller quant for safety.

By Fredoline Eruo · Last verified May 6, 2026

Memory available: 24 GB

Recommended quant: Q4_K_M (highest quality that fits)

Quick start with Ollama

1. Install
ollama pull qwen2.5-coder:32b
2. Run
ollama run qwen2.5-coder:32b

Default quant in Ollama is Q4_K_M. To use a different quant, append it: qwen2.5-coder:32b-q5_K_M.
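The two-step quick start above can also be driven programmatically through Ollama's local REST API. A minimal sketch, assuming `ollama serve` is running on the default port 11434; the prompt text is illustrative:

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint.
# "stream": False returns a single JSON object instead of a token stream.
payload = {
    "model": "qwen2.5-coder:32b",
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,
}

def generate(payload, url="http://localhost:11434/api/generate"):
    """POST the payload to the local Ollama server and return the model's text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# result = generate(payload)  # requires a running Ollama server
```

To target a different quant, change the `model` tag the same way as on the command line.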

Variants and what fits

| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 4090? |
| --- | --- | --- | --- |
| Q4_K_M | 19.0 GB | 24 GB | Yes |
| Q8_0 | 34.0 GB | 40 GB | No |
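The fit check behind the table is simple arithmetic: headroom is the card's VRAM minus the "VRAM required" figure (which already accounts for runtime overhead such as the KV cache). A sketch using the numbers from the table above:

```python
# Reproduce the fit check from the variants table.
GPU_VRAM_GB = 24  # NVIDIA GeForce RTX 4090

VARIANTS = {
    "Q4_K_M": {"file_gb": 19.0, "required_gb": 24},
    "Q8_0":   {"file_gb": 34.0, "required_gb": 40},
}

def headroom_gb(quant: str, vram_gb: float = GPU_VRAM_GB) -> float:
    """VRAM left over after loading the quant (negative means it does not fit)."""
    return vram_gb - VARIANTS[quant]["required_gb"]

def fits(quant: str, vram_gb: float = GPU_VRAM_GB) -> bool:
    return headroom_gb(quant, vram_gb) >= 0
```

With these figures, Q4_K_M fits with exactly 0 GB of headroom, which is why the page recommends reducing context or dropping to a smaller quant for safety.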

Real benchmarks

| Tool | Quant | Context | tok/s | VRAM used | Source |
| --- | --- | --- | --- | --- | --- |
| vLLM | AWQ-INT4 | 32,768 | 38.2 | 21.8 GB | community |

Frequently asked

Can NVIDIA GeForce RTX 4090 run Qwen 2.5 Coder 32B Instruct?

Qwen 2.5 Coder 32B Instruct fits at Q4_K_M, but headroom is tight (0 GB). Reduce context or use a smaller quant for safety.

What quantization should I use?

Q4_K_M is the highest-quality variant of Qwen 2.5 Coder 32B Instruct that fits in 24 GB VRAM. Lower-bit quants will be smaller but lose some quality.

How fast will it be?

Community benchmarks report 38.2 tok/s on this combination (vLLM, AWQ-INT4, 32,768 context).
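That throughput translates directly into response latency. A back-of-envelope sketch, assuming the community-measured rate above holds for your workload:

```python
# Estimate generation time from the benchmarked throughput.
TOK_PER_S = 38.2  # community-measured (vLLM, AWQ-INT4)

def seconds_for(tokens: int, tok_per_s: float = TOK_PER_S) -> float:
    """Rough wall-clock time to generate the given number of tokens."""
    return tokens / tok_per_s

# A ~500-token code completion takes roughly 13 seconds at this rate.
```

Note this covers generation only; prompt processing (prefill) adds time on top, especially at long contexts.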

See also: Qwen 2.5 Coder 32B Instruct, NVIDIA GeForce RTX 4090, all benchmarks.

Reviewed by RunLocalAI Editorial. See our editorial policy.