Tight fit
Running DeepSeek R1 Distill Qwen 32B on NVIDIA GeForce RTX 4090
DeepSeek R1 Distill Qwen 32B fits at Q4_K_M, but headroom is tight (0 GB). Reduce context or use a smaller quant for safety.
Model size: 32B params
Memory available: 24 GB (NVIDIA GeForce RTX 4090)
Recommended quant: Q4_K_M (highest quality that fits)
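A quant's file size can be sanity-checked from parameter count and average bits per weight. A minimal sketch, assuming Q4_K_M averages roughly 4.85 bits/weight (an approximation; the exact figure depends on the mixed 4/6-bit block layout):

```python
def quant_size_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate on-disk size of a quantized model in GB."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9

# 32B params at ~4.85 bits/weight -> about 19.4 GB,
# in line with the 19.0 GB Q4_K_M file listed below.
print(round(quant_size_gb(32, 4.85), 1))
```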
Quick start with Ollama
1. Install: `ollama pull deepseek-r1:32b`
2. Run: `ollama run deepseek-r1:32b`
The default quant in Ollama is Q4_K_M. To use a different quant, append its tag: `deepseek-r1:32b-q5_K_M`.
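Because headroom is tight, capping the context window saves VRAM. One way to make that the default is a custom Modelfile; a sketch, assuming an 8K context is acceptable for your workload (the model name `deepseek-r1-8k` is our example):

```
# Modelfile
FROM deepseek-r1:32b
PARAMETER num_ctx 8192
```

Build and run it with `ollama create deepseek-r1-8k -f Modelfile` followed by `ollama run deepseek-r1-8k`.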
Variants and what fits
| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 4090? |
|---|---|---|---|
| Q4_K_M | 19.0 GB | 24 GB | Yes |
| Q8_0 | 34.0 GB | 40 GB | No |
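The gap between file size and VRAM required in the table above is runtime overhead (KV cache, activations, CUDA context). A rough fit check, assuming a flat ~5 GB overhead (an assumption; real overhead grows with context length):

```python
VRAM_GB = 24  # NVIDIA GeForce RTX 4090

def fits(file_size_gb: float, overhead_gb: float = 5.0) -> bool:
    """Rough check: model weights plus assumed runtime overhead
    must fit in total VRAM. Overhead varies with context length."""
    return file_size_gb + overhead_gb <= VRAM_GB

print(fits(19.0))  # Q4_K_M: True (exactly at the 24 GB limit)
print(fits(34.0))  # Q8_0:   False
```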
Real benchmarks
| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| vLLM | AWQ-INT4 | 32,768 | 32.5 tok/s | 22.4 GB | community |
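To translate the benchmark into wall-clock expectations, generation time is simply tokens divided by throughput. A quick sketch using the community-measured 32.5 tok/s:

```python
def generation_time_s(n_tokens: int, toks_per_s: float) -> float:
    """Seconds to generate n_tokens at a steady decode rate
    (ignores prompt-processing time, which adds to the total)."""
    return n_tokens / toks_per_s

# A 500-token answer at 32.5 tok/s takes about 15.4 s
print(round(generation_time_s(500, 32.5), 1))
```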
Frequently asked
Can NVIDIA GeForce RTX 4090 run DeepSeek R1 Distill Qwen 32B?
DeepSeek R1 Distill Qwen 32B fits at Q4_K_M, but headroom is tight (0 GB). Reduce context or use a smaller quant for safety.
What quantization should I use?
Q4_K_M is the highest-quality variant of DeepSeek R1 Distill Qwen 32B that fits in 24 GB VRAM. Lower-bit quants will be smaller but lose some quality.
How fast will it be?
Community benchmarks report 32.5 tok/s on this combination, measured with vLLM at AWQ-INT4 quantization and a 32,768-token context.
See also: DeepSeek R1 Distill Qwen 32B, NVIDIA GeForce RTX 4090, all benchmarks.
Reviewed by RunLocalAI Editorial. See our editorial policy.