Running Llama 3.1 8B Instruct on NVIDIA GeForce RTX 5080

NVIDIA GeForce RTX 5080 runs Llama 3.1 8B Instruct comfortably at Q8_0 with 6 GB of headroom for context.

By Fredoline Eruo · Last verified May 6, 2026

Model size: 8B parameters
Memory available: 16 GB VRAM
Recommended quant: Q8_0 (highest quality that fits)

Quick start with Ollama

1. Install
ollama pull llama3.1:8b
2. Run
ollama run llama3.1:8b

Ollama's default quant for this model is Q4_K_M. To use a different quant, append it as a tag suffix, e.g. llama3.1:8b-q5_K_M.
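Beyond the CLI, a locally running Ollama instance also exposes an HTTP API on its default port 11434, which is handy for scripting. A minimal sketch using only the Python standard library (assumes the server is already running and the model from the pull command above is available):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("llama3.1:8b", "Why is the sky blue?")
    # Requires `ollama serve` (or the desktop app) to be running locally.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

With `stream` set to `False` the server returns one JSON object whose `response` field holds the full completion; omit it to receive newline-delimited streaming chunks instead.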

Variants and what fits

| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 5080? |
|---|---|---|---|
| Q4_K_M | 4.9 GB | 6 GB | Yes |
| Q5_K_M | 5.7 GB | 7 GB | Yes |
| Q8_0 | 8.5 GB | 10 GB | Yes |
| FP16 | 16.1 GB | 18 GB | No |
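Note that the VRAM-required column runs ahead of the raw file size: the gap is runtime overhead for the KV cache and activations. A rough fit check can be sketched in Python (the 1.2x margin is a hypothetical rule of thumb eyeballed from the table above, not an official figure; real usage grows with context length):

```python
# File sizes in GB, taken from the variants table above.
VARIANTS = {
    "Q4_K_M": 4.9,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "FP16": 16.1,
}

def fits(quant: str, vram_gb: float, margin: float = 1.2) -> bool:
    """Estimate whether a quant fits: file size times an overhead margin
    (hypothetical 1.2x, covering KV cache and runtime buffers) must not
    exceed available VRAM."""
    return VARIANTS[quant] * margin <= vram_gb

for quant in VARIANTS:
    verdict = "fits" if fits(quant, vram_gb=16.0) else "does not fit"
    print(f"{quant}: {verdict} in 16 GB")
```

At 16 GB this reproduces the table's verdicts: everything through Q8_0 passes, FP16 does not.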

Real benchmarks

| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| Ollama | Q4_K_M | 8,192 | 118.2 | 5.4 GB | community |

Frequently asked

Can NVIDIA GeForce RTX 5080 run Llama 3.1 8B Instruct?

NVIDIA GeForce RTX 5080 runs Llama 3.1 8B Instruct comfortably at Q8_0 with 6 GB of headroom for context.

What quantization should I use?

Q8_0 is the highest-quality variant of Llama 3.1 8B Instruct that fits in 16 GB VRAM. Lower-bit quants will be smaller but lose some quality.

How fast will it be?

A community-sourced benchmark measured 118.2 tok/s on this GPU with Ollama at Q4_K_M and an 8,192-token context.

See also: Llama 3.1 8B Instruct, NVIDIA GeForce RTX 5080, all benchmarks.

Reviewed by RunLocalAI Editorial. See our editorial policy.