Fits comfortably
Running Qwen 2.5 Coder 7B Instruct on NVIDIA GeForce RTX 3080 16GB (Mobile)
NVIDIA GeForce RTX 3080 16GB (Mobile) runs Qwen 2.5 Coder 7B Instruct comfortably at Q6_K with 8 GB of headroom for context.
Model size: 7B params
Memory available: 16 GB VRAM
Recommended quant: Q6_K (highest quality that fits)
Quick start with Ollama
1. Pull the model

ollama pull qwen2.5-coder:7b

2. Run

ollama run qwen2.5-coder:7b

The default quant in Ollama is Q4_K_M. To use a different quant, append its tag: qwen2.5-coder:7b-q5_K_M.
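Beyond the CLI, Ollama also serves a local REST API on port 11434 by default, which is handy for scripting against the model. A minimal sketch (the prompt text is just an example; it assumes `ollama serve` is running locally):

```shell
# Build a request payload for Ollama's local REST API.
PAYLOAD='{"model": "qwen2.5-coder:7b", "prompt": "Write a Python hello world", "stream": false}'
echo "$PAYLOAD"

# Send it once the Ollama server is running (default port 11434):
# curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```

With `"stream": false` the API returns one JSON object containing the full response instead of a stream of partial chunks.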
Variants and what fits
| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 3080 16GB (Mobile)? |
|---|---|---|---|
| Q4_K_M | 4.7 GB | 6 GB | Yes |
| Q6_K | 6.3 GB | 8 GB | Yes |
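The gap between file size and VRAM required in the table above is mostly the KV cache, which grows with context length. A rough back-of-the-envelope sketch (the architecture numbers for Qwen2.5-7B — 28 layers, 4 KV heads via GQA, head dim 128 — are assumptions taken from its published config, and fp16 cache is assumed):

```shell
# Estimate fp16 KV-cache size for Qwen2.5-7B at a given context length.
# Per token: 2 (K and V) * layers * kv_heads * head_dim * 2 bytes (fp16).
LAYERS=28
KV_HEADS=4
HEAD_DIM=128
BYTES_PER_EL=2   # fp16
CTX=8192

KV_BYTES=$(( 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_EL * CTX ))
echo "KV cache at ${CTX} ctx: $(( KV_BYTES / 1048576 )) MiB"
```

At 8K context this works out to well under 1 GB, which is why the table's VRAM figures sit only a little above the weight file sizes; runtime buffers and fragmentation account for the rest.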
Real benchmarks
| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| Ollama | Q4_K_M | 8,192 | 79.4 tok/s | — | owner |
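You can reproduce a number like this yourself: running Ollama with `--verbose` prints timing stats after each response. A sketch of pulling the figure out of that output (the sample line mimics Ollama's stats format and is an assumption, not captured output):

```shell
# `ollama run qwen2.5-coder:7b --verbose "some prompt"` ends with stats
# lines; the one of interest looks roughly like this sample:
SAMPLE='eval rate:            79.40 tokens/s'

# Extract the tokens/s figure (second-to-last whitespace-separated field):
echo "$SAMPLE" | awk '{print $(NF-1)}'
```

The "eval rate" line reflects generation speed; "prompt eval rate" on the line above it measures prompt processing instead.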
Frequently asked
Can NVIDIA GeForce RTX 3080 16GB (Mobile) run Qwen 2.5 Coder 7B Instruct?
NVIDIA GeForce RTX 3080 16GB (Mobile) runs Qwen 2.5 Coder 7B Instruct comfortably at Q6_K with 8 GB of headroom for context.
What quantization should I use?
Q6_K is the highest-quality variant of Qwen 2.5 Coder 7B Instruct that fits in 16 GB VRAM. Lower-bit quants will be smaller but lose some quality.
How fast will it be?
Measured at 79.4 tok/s in our testing on this combination (Ollama, Q4_K_M, 8,192-token context).
See also: Qwen 2.5 Coder 7B Instruct, NVIDIA GeForce RTX 3080 16GB (Mobile), all benchmarks.
Reviewed by RunLocalAI Editorial. See our editorial policy.