RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other first-class retailers). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts — there are cards we rate highly that we don't have affiliate relationships with, and cards that sell well that we refuse to recommend.

UNIT · NVIDIA · GPU
11 GB VRAM · enthusiast · Reviewed May 2026

NVIDIA GeForce RTX 2080 Ti

Turing flagship. 11 GB GDDR6 at 616 GB/s — fits 13B Q4 comfortably, 7B Q4 at ~110-140 tok/s. Used $360-420 in 2026 makes it the 'enthusiast on a budget' floor; competes with used 3060 12 GB on raw VRAM and beats it on compute.

Released 2018 · ~$380 street · 616 GB/s memory bandwidth

RUNLOCALAI SCORE
See full leaderboard →
363 / 1000 · C-tier (estimated)
  • Throughput: 214 / 500
  • VRAM fit: 80 / 200
  • Ecosystem: 200 / 200
  • Efficiency: 24 / 100

Extrapolated from 616 GB/s bandwidth — 73.9 tok/s estimated. No measured benchmarks yet.
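The bandwidth extrapolation above can be sketched in a few lines: decode on a dense model is roughly memory-bandwidth-bound, since each generated token reads the full weight set once. The function and the reference model sizes below are our illustrative assumptions, not the site's published method (note that 616 / 8.33 ≈ 73.9, which suggests an ~8.3 GB reference footprint).

```python
# Rough, bandwidth-bound decode throughput estimate for a dense model.
# Assumption (ours, not RunLocalAI's documented formula): each generated
# token streams every weight from VRAM once, so
#     tok/s ~= memory bandwidth / model size in VRAM.

def estimate_toks_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode speed for a model that fits entirely in VRAM."""
    return bandwidth_gb_s / model_size_gb

# RTX 2080 Ti: 616 GB/s memory bandwidth.
# Model sizes are approximate Q4 footprints (our assumptions).
print(round(estimate_toks_per_sec(616, 7.9), 1))  # ~13B at Q4
print(round(estimate_toks_per_sec(616, 4.1), 1))  # ~7B at Q4
```

Real throughput lands below this bound (kernel overhead, KV-cache reads, prompt processing), which is why measured 7B numbers cluster well under bandwidth ÷ size.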

WORKLOAD FIT
Try other hardware →

Plain-English: Best for 7B; 14B is tight — coding agent feels deliberate; vision models supported.

  • 7B chat: ✓ Comfortable
  • 14B chat: ~ Tight
  • 32B chat: ✗ Doesn't fit
  • 70B chat: ✗ Doesn't fit
  • Coding agent: ~ Tight
  • Vision (≤8B VLM): ✓ Comfortable
  • Long context (32K): ~ Tight

Legend:
  ✓ Comfortable — fits with headroom
  ~ Tight — works, no slack
  △ Marginal — needs aggressive quant
  ✗ Doesn't fit usefully

Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
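A minimal sketch of that kind of VRAM-based extrapolation. The thresholds, the ~4.5 bits-per-weight figure for Q4-class quants, and the fixed context allowance are all our illustrative assumptions, not RunLocalAI's actual scoring code:

```python
# Toy VRAM-fit verdict: compare an estimated model + context footprint
# against card VRAM. All constants here are our assumptions, not the
# site's published formula.

def fit_verdict(vram_gb: float, params_b: float, bits: float = 4.5,
                ctx_overhead_gb: float = 1.5) -> str:
    need = params_b * bits / 8 + ctx_overhead_gb  # weights + KV/runtime
    if need <= vram_gb * 0.85:
        return "comfortable"   # fits with headroom
    if need <= vram_gb:
        return "tight"         # works, no slack
    if params_b * 3.5 / 8 + ctx_overhead_gb <= vram_gb:  # retry at ~Q3
        return "marginal"      # needs aggressive quant
    return "doesn't fit"

# 11 GB card, the model sizes from the chips above:
for size in (7, 14, 32, 70):
    print(f"{size}B chat: {fit_verdict(11, size)}")
```

With these (assumed) constants the output reproduces the chip verdicts above: 7B comfortable, 14B tight, 32B and 70B don't fit.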

BLK · VERDICT

Our verdict

OP · Fredoline Eruo | VERIFIED MAY 10, 2026
6.6/10

This card is for the operator who needs 13B-class models at home and refuses to pay scalper prices for a 3090. The 2080 Ti runs 7B Q4 at ~110-140 tok/s and 13B Q4 at ~60-80 tok/s, both comfortably above interactive thresholds. 11 GB VRAM fits 13B Q4 with room for context, and 34B is possible with aggressive quantization (e.g., Q3_K_M) at ~20-30 tok/s. What breaks: 11 GB is tight for 34B even aggressively quantized, especially with long context, and Turing lacks official support for FP8 and newer attention kernels, so some inference frameworks may underperform. Pass on this card if you need 70B models or want native FP8 support; a used 3090 or 4070 Ti Super adds headroom. At $380 used, this is the cheapest entry to 13B-class local AI, but expect to tinker with quantization and context limits.

›Why this rating

The 2080 Ti offers exceptional value for 13B models at its used price, but its 11 GB VRAM and lack of modern features (FP8, sparse attention) limit it compared to newer cards. It's a strong budget enthusiast pick, not a top-tier workhorse.

BLK · OVERVIEW

Overview


Retailers we'd check: Amazon


BLK · SPECS

Specs

  • VRAM: 11 GB
  • Power draw: 250 W
  • Released: 2018
  • MSRP: $1199
  • Backends: CUDA, Vulkan

Models that fit

Open-weight models small enough to run on NVIDIA GeForce RTX 2080 Ti with usable context.

  • Llama 3.2 3B Instruct · 3B · llama
  • Gemma 4 E4B (Effective 4B) · 4B · gemma
  • Qwen 3 4B · 4B · qwen
  • Phi-3.5 Mini Instruct · 3.8B · phi
  • Llama 3.2 1B Instruct · 1B · llama
  • Gemma 3 4B · 4B · gemma
  • Gemma 4 E2B (Effective 2B) · 2B · gemma
  • Phi-3.5 Vision · 4.2B · phi

Frequently asked

What models can NVIDIA GeForce RTX 2080 Ti run?

With 11 GB VRAM, the NVIDIA GeForce RTX 2080 Ti runs models up to 14B at 4-bit quantization, or 7B-class models at higher precision (e.g., 8-bit). See the model list above for tested combinations.
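The "up to 14B in 4-bit" answer is plain weight-size arithmetic: parameters × bits-per-weight ÷ 8 bytes. A sketch with approximate bits-per-weight figures for common GGUF quants (our assumptions; a real footprint also adds KV cache and runtime overhead):

```python
# Back-of-envelope model footprint: params * bits-per-weight / 8.
# Bits-per-weight values are approximations for common GGUF quants
# (K-quants store metadata, so effective bpw sits above the nominal bits).

BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5, "F16": 16.0}  # approx

def weights_gb(params_b: float, quant: str) -> float:
    """Approximate in-VRAM size of the weights alone, in GB."""
    return params_b * BITS_PER_WEIGHT[quant] / 8

for params in (7, 14):
    for quant in ("Q4_K_M", "Q8_0"):
        print(f"{params}B {quant}: ~{weights_gb(params, quant):.1f} GB")
```

On these numbers a 14B Q4 model leaves roughly 2-3 GB of an 11 GB card for context, while 14B at 8-bit overshoots VRAM entirely, matching the answer above.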

Does NVIDIA GeForce RTX 2080 Ti support CUDA?

Yes — NVIDIA GeForce RTX 2080 Ti is an NVIDIA card with full CUDA support, the most mature local-AI backend. llama.cpp, Ollama, vLLM, and ExLlamaV2 all run natively.

How much does NVIDIA GeForce RTX 2080 Ti cost?

Current street price for NVIDIA GeForce RTX 2080 Ti is around $380 (MSRP $1199). Prices vary by region and supply.

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.

Compare alternatives

Hardware worth comparing

Same VRAM tier, plus one step above and below, so you can frame the buying decision against real options.

Same VRAM tier (cards in the same memory band)
  • AMD Radeon RX 6800 XT · 16 GB VRAM · 7.3/10
  • AMD Radeon RX 6800 · 16 GB VRAM · 7.3/10
  • AMD Radeon RX 6900 XT · 16 GB VRAM · 7.3/10
  • AMD Radeon RX 6950 XT · 16 GB VRAM · 7.6/10
  • NVIDIA GeForce RTX 3070 Ti · 8 GB VRAM · 5.0/10
  • Intel Arc B580 · 12 GB VRAM · 6.3/10
Step up (more VRAM: bigger models, more context)
  • AMD Radeon RX 6800 XT · 16 GB VRAM · 7.3/10
  • NVIDIA GeForce RTX 4080 Super · 16 GB VRAM · 8.1/10
  • Intel Arc A770 16GB · 16 GB VRAM · 6.5/10
Step down (less VRAM: cheaper, more constrained)
  • AMD Radeon RX 6800 · 16 GB VRAM · 7.3/10
  • NVIDIA GeForce RTX 3070 Ti · 8 GB VRAM · 5.0/10
  • Intel Arc B580 · 12 GB VRAM · 6.3/10