Llama 3.1 8B Instruct on NVIDIA GeForce RTX 5080
Measured 2026-05-11.
| Measurement | Value |
| --- | --- |
| tok/s | 132.2 |
| TTFT | 123 ms |
| VRAM used | — |
| RAM used | — |
| Power | — |
| Quant | Q4_K_M |
| Context | 4K |
| Run date | 2026-05-11 |
| Source | owner |
Rigor detail (v36.52)

| Protocol | Value |
| --- | --- |
| Cold-start decode | 131.99 tok/s · TTFT 124 ms |
| Steady-state median | 132.20 tok/s · P5 131.6 · P95 132.9 |
| Runs captured | 5 · reproduced ✓ |
| Scenario | Single-stream |

5-run capture · variance 1.1% · scenario single-stream · runtime ollama
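The steady-state figures above (median, P5/P95, spread) can be recomputed from the raw per-run throughput samples. A minimal sketch, assuming five hypothetical samples; the `runs` values below are illustrative, not the actual capture data.

```python
import statistics

# Hypothetical per-run decode throughputs in tok/s (illustrative only).
runs = [131.6, 132.0, 132.2, 132.5, 132.9]

median = statistics.median(runs)        # steady-state median
p5, p95 = min(runs), max(runs)          # with only 5 runs, P5/P95 collapse to min/max
spread_pct = 100 * (p95 - p5) / median  # run-to-run variance as a % spread

print(f"median {median:.2f} tok/s · P5 {p5} · P95 {p95} · spread {spread_pct:.1f}%")
```

With more runs captured, true percentiles (e.g. `statistics.quantiles`) would replace the min/max shortcut.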
Why this confidence tier?
Confidence is rule-based. Every factor below contributed to the tier. We never expose a single numeric score; the tier label is auditable through this explanation alone.
- + Measured by RunLocalAI editorial
- + Marked reproduced by editorial
- Read the confidence methodology → full editorial standards for tiering.
- Why we don't use percentages → tier labels are auditable, with no opaque score.
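The rule-based idea above can be illustrated as a function that maps boolean factors to a tier label plus its audit trail. This is a concept sketch only: the factor names, tier labels, and rules here are hypothetical, not RunLocalAI's actual methodology.

```python
# Hypothetical rule-based confidence tiering: each boolean factor is a rule,
# and the tier label is derived from which rules fired -- no numeric score
# is ever computed or exposed.
def confidence_tier(factors: dict) -> str:
    fired = [name for name, hit in factors.items() if hit]
    if "measured_by_editorial" in fired and "reproduced" in fired:
        tier = "high"
    elif "measured_by_editorial" in fired:
        tier = "medium"
    else:
        tier = "community"
    # The explanation *is* the audit trail: the tier plus contributing factors.
    return f"{tier} ({', '.join(fired) or 'no factors'})"

print(confidence_tier({"measured_by_editorial": True, "reproduced": True}))
```

Because the output carries the factor list alongside the label, the tier stays auditable from the explanation alone.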
Cohort intelligence
How this measurement compares to the rest of the corpus. Only comparable rows (same model + hardware first, with relaxations labelled) are used. We never average across runtimes or quant formats unless explicitly told to.
Same model + hardware, different runtime
1 matching row. Variance here is pure runtime/version drift; a wide spread suggests a runtime regression candidate worth investigating.
- 118.2 tok/s · rtx-5080 · Q4_K_M · Editorial
Same model, different hardware
7 matching rows. What this model looks like on adjacent hardware; drives the "should I upgrade?" question.
- 150.0 tok/s · rtx-4090 · Q4_K_M · Editorial
- 105.0 tok/s · rtx-3090 · Q4_K_M · Editorial
- 86.4 tok/s · rx-7900-xtx · Q4_K_M · Editorial
- 78.5 tok/s · apple-m4-max · MLX-4bit · Editorial
- 78.5 tok/s · apple-m4-max · MLX-4bit · Editorial
- +2 more
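The cohort rules above (exact model + hardware first, then a labelled hardware relaxation, never mixing models) can be sketched as a filter. The row data and field layout are hypothetical, for illustration only.

```python
# Hypothetical corpus rows: (model, hardware, runtime, tok_s). Illustrative data.
rows = [
    ("llama-3.1-8b", "rtx-5080", "ollama",    132.2),
    ("llama-3.1-8b", "rtx-5080", "llama.cpp", 118.2),
    ("llama-3.1-8b", "rtx-4090", "ollama",    150.0),
    ("qwen-2.5-7b",  "rtx-5080", "ollama",    140.0),  # different model: never comparable
]

def cohort(model, hardware):
    """Exact model+hardware matches first; the hardware relaxation is kept
    in a separately labelled bucket rather than silently mixed in."""
    exact   = [r for r in rows if r[0] == model and r[1] == hardware]
    relaxed = [r for r in rows if r[0] == model and r[1] != hardware]
    return {"same model + hardware": exact,
            "same model, different hardware": relaxed}

c = cohort("llama-3.1-8b", "rtx-5080")
```

Note that averaging across the buckets (or across runtimes/quants within one) is deliberately left out, matching the no-averaging rule stated above.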
Reproduce this benchmark
Got the same model + hardware combo? Run the same measurement and submit your numbers. We'll pre-fill model, hardware, quant, and context — you just add your tok/s, VRAM, runtime version. If your numbers match within ±15%, this benchmark gets a confidence lift and a reproduction badge.
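The ±15% acceptance window described above amounts to a relative-difference check. A minimal sketch; the function name and threshold handling are assumptions, not the site's actual submission logic.

```python
def reproduces(reference_tok_s: float, submitted_tok_s: float,
               tolerance: float = 0.15) -> bool:
    """True if the submitted throughput is within ±15% of the reference run."""
    return abs(submitted_tok_s - reference_tok_s) / reference_tok_s <= tolerance

print(reproduces(132.2, 120.0))  # ~9% below the reference: counts as a reproduction
print(reproduces(132.2, 100.0))  # ~24% below: outside the window
```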
Related
Drill into the entity pages for this measurement.
Cite or export
Reference this benchmark in your work or paste it into a README. Multiple formats with copy-to-clipboard; licensed CC-BY-4.0 (attribution to RunLocalAI required).
<a href="https://runlocalai.co/benchmarks/338" rel="noopener">RunLocalAI: Llama 3.1 8B Instruct on NVIDIA GeForce RTX 5080 — 132.2 tok/s</a>
Next recommended step
Got the same model + hardware? Run it and submit your numbers — successful reproductions lift this benchmark's confidence tier.