# Running Llama 3.1 8B Instruct on AMD Radeon RX 7900 XTX

**Verdict: fits comfortably.** The AMD Radeon RX 7900 XTX runs Llama 3.1 8B Instruct comfortably at FP16, with roughly 6 GB of headroom left for context.
- **Model size:** 8B parameters
- **Memory available:** 24 GB VRAM
- **Recommended quant:** FP16 (highest quality that fits)
## Quick start with Ollama

1. Install the model:

   ```
   ollama pull llama3.1:8b
   ```

2. Run it:

   ```
   ollama run llama3.1:8b
   ```

Ollama's default quant is Q4_K_M. To use a different quant, append its tag: `llama3.1:8b-q5_K_M`.
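Beyond the interactive CLI, the same model can be queried programmatically. A minimal sketch, assuming a local Ollama server on its default port 11434 and using Ollama's `/api/generate` REST endpoint:

```python
# Sketch: querying a locally running Ollama server over its REST API
# instead of the interactive CLI. Assumes the server is listening on the
# default port 11434 and the model has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a single non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the request and return the model's text response."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server):
#   print(generate("llama3.1:8b", "Why is the sky blue?"))
```

To target a specific quant, pass its tag (e.g. `llama3.1:8b-q5_K_M`) as the model name.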
## Variants and what fits
| Quantization | File size | VRAM required | Fits on AMD Radeon RX 7900 XTX? |
|---|---|---|---|
| Q4_K_M | 4.9 GB | 6 GB | Yes |
| Q5_K_M | 5.7 GB | 7 GB | Yes |
| Q8_0 | 8.5 GB | 10 GB | Yes |
| FP16 | 16.1 GB | 18 GB | Yes |
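The "VRAM required" column follows a simple rule of thumb: weight bytes (parameters times bits per weight) plus a fixed allowance for activations and runtime buffers. A rough sketch of that arithmetic, where the bits-per-weight figures are approximations (K-quants are mixed-precision) and the ~1.5 GB overhead is an assumption rather than a measured value:

```python
# Rule-of-thumb VRAM estimate: weight bytes plus a fixed overhead for
# activations and runtime buffers. Bits-per-weight values are approximate
# (K-quants mix precisions); the 1.5 GB overhead is an assumption.
BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q5_K_M": 5.7, "Q8_0": 8.5, "FP16": 16.0}

def estimate_vram_gb(params_b: float, quant: str, overhead_gb: float = 1.5) -> float:
    """Estimate VRAM in GB for a model with `params_b` billion parameters."""
    weights_gb = params_b * BITS_PER_WEIGHT[quant] / 8  # billions of params -> GB
    return round(weights_gb + overhead_gb, 1)

for q in BITS_PER_WEIGHT:
    need = estimate_vram_gb(8.0, q)
    print(q, need, "fits" if need <= 24 else "too big")  # RX 7900 XTX: 24 GB
```

The estimates land close to the table above (e.g. ~17.5 GB for FP16 versus the listed 18 GB); real usage also grows with context length.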
## Real benchmarks
| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| Ollama | Q4_K_M | 8,192 | 86.4 tok/s | 5.6 GB | community |
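The headroom left after loading weights goes mostly to the KV cache, which grows linearly with context length. A sketch of that calculation using Llama 3.1 8B's published architecture (32 layers, 8 KV heads via grouped-query attention, head dimension 128):

```python
# Sketch: FP16 KV-cache size for a given context length, using Llama 3.1
# 8B's architecture (32 layers, 8 KV heads via GQA, head dim 128).
def kv_cache_gb(ctx_len: int, n_layers: int = 32, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """Return KV-cache size in GiB at FP16 for `ctx_len` tokens."""
    # 2x for keys and values; one entry per layer, KV head, head dim, token
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx_len
    return elems * bytes_per_elem / 1024**3

print(f"{kv_cache_gb(8192):.2f} GiB")  # → 1.00 GiB
```

At the benchmark's 8,192-token context the cache costs about 1 GiB, which is why even the FP16 weights leave room to spare on a 24 GB card.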
## Frequently asked

### Can the AMD Radeon RX 7900 XTX run Llama 3.1 8B Instruct?

Yes. It runs the model comfortably at FP16, with roughly 6 GB of VRAM left over for context.

### What quantization should I use?

FP16 is the highest-quality variant of Llama 3.1 8B Instruct that fits in the card's 24 GB of VRAM. Lower-bit quants are smaller but lose some quality.

### How fast will it be?

One community-sourced measurement reports 86.4 tok/s for this combination (Ollama, Q4_K_M, 8,192-token context).
See also: Llama 3.1 8B Instruct, AMD Radeon RX 7900 XTX, all benchmarks.
Reviewed by RunLocalAI Editorial. See our editorial policy.