RUNLOCALAI · v38

Independently operated catalog for local-AI hardware and software. Hand-written verdicts. Source-cited claims. Reproducible commands when we have them.

OP · Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other retail programs). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts: there are cards we rate highly without any affiliate relationship, and cards that sell well that we refuse to recommend. Read more →

© 2026 runlocalai.co · Independently operated
UNIT · NVIDIA · GPU
22 GB VRAM · mid-tier · Reviewed May 2026

NVIDIA RTX 2080 Ti 22GB (China-mod)

Chinese third-party modification of the stock RTX 2080 Ti that swaps the 11 GB of GDDR6 for 22 GB. The TU102 chip, 352-bit memory bus, and 616 GB/s of bandwidth are unchanged; only the VRAM density doubles. Builders commonly run these cards in multi-card stacks (8 cards × 22 GB = 176 GB in a single chassis for under $3,000). The trade-off: Turing is several generations behind, with no FP8 support and no Transformer Engine kernels. AWQ/INT4 inference works, but expect fewer tok/s per GB of VRAM than a modern card delivers.

Released 2023 · ~$350 street · 616 GB/s memory bandwidth
RUNLOCALAI SCORE · 405 / 1000 · C-tier · Estimated
See full leaderboard →
  • Throughput: 214 / 500
  • VRAM-fit: 140 / 200
  • Ecosystem: 200 / 200
  • Efficiency: 24 / 100

Extrapolated from 616 GB/s bandwidth — 73.9 tok/s estimated. No measured benchmarks yet.
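The 73.9 tok/s figure is consistent with a simple memory-bandwidth roofline: during decode, each generated token reads roughly the entire weight file, so throughput tops out near bandwidth divided by model size. A minimal sketch of that estimate; the function name, the ~8.3 GB model size, and the efficiency factor are our illustrative assumptions, not the site's methodology:

```python
def est_decode_tok_s(bandwidth_gb_s: float, model_gb: float,
                     efficiency: float = 1.0) -> float:
    """Upper-bound decode throughput for a memory-bound LLM:
    every generated token streams (roughly) all the weights once."""
    return efficiency * bandwidth_gb_s / model_gb

# RTX 2080 Ti 22GB: 616 GB/s of bandwidth.
# The ~8.3 GB quantized model is our assumption for illustration.
print(round(est_decode_tok_s(616, 8.33), 1))  # ~73.9 tok/s
```

Real-world numbers land below this bound (attention reads the KV cache too, and kernels are not perfectly bandwidth-efficient), which is why measured benchmarks matter.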

WORKLOAD FIT
Try other hardware →

Plain-English: Workable at 32B, comfortable at 14B and below — snappy enough for a coding agent; vision models supported.

7B chat: ✓ Comfortable
14B chat: ✓ Comfortable
32B chat: ~ Tight
70B chat: ✗ Doesn't fit
Coding agent: ✓ Comfortable
Vision (≤8B VLM): ✓ Comfortable
Long context (32K): ✓ Comfortable

Legend:
✓ Comfortable: fits with headroom
~ Tight: works, no slack
△ Marginal: needs aggressive quant
✗ Doesn't fit usefully

Verdicts are extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
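One way verdicts like these can be derived from catalog numbers alone is a budget of quantized weights plus a KV-cache allowance against the 22 GB of VRAM. A minimal sketch; the thresholds, allowances, and function name below are our illustrative assumptions, not the site's actual scoring code:

```python
def fit_verdict(params_b: float, vram_gb: float = 22.0, bits: int = 4,
                ctx_kv_gb: float = 2.0, overhead_gb: float = 1.0) -> str:
    """Rough fit check: quantized weights + KV-cache allowance +
    runtime overhead vs. available VRAM. Thresholds are illustrative."""
    need = params_b * bits / 8 + ctx_kv_gb + overhead_gb  # GB
    headroom = vram_gb - need
    if headroom >= vram_gb * 0.15:
        return "comfortable"
    if headroom >= 0:
        return "tight"
    # Would a more aggressive ~3-bit quant squeeze it in?
    if vram_gb - (params_b * 3 / 8 + ctx_kv_gb + overhead_gb) >= 0:
        return "marginal"
    return "doesn't fit"

for n in (7, 14, 32, 70):
    print(f"{n}B chat: {fit_verdict(n)}")
# comfortable / comfortable / tight / doesn't fit
```

The same budget explains the 32K-context verdict: a longer context grows only the KV-cache term, which the 22 GB card can absorb for models of 14B and below.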


Overview

Chinese third-party modification of the stock RTX 2080 Ti that swaps the 11 GB of GDDR6 for 22 GB. The TU102 chip, 352-bit memory bus, and 616 GB/s of bandwidth are unchanged; only the VRAM density doubles. Builders commonly run these cards in multi-card stacks (8 cards × 22 GB = 176 GB in a single chassis for under $3,000). The trade-off: Turing is several generations behind, with no FP8 support and no Transformer Engine kernels. AWQ/INT4 inference works, but expect fewer tok/s per GB of VRAM than a modern card delivers. The same caveats apply as to any aftermarket mod: no NVIDIA warranty, sourcing usually means importing from Taobao, and kernel support for older architectures has thinned over time.

Retailers we'd check: Amazon

Search-fallback link: editorial hasn't yet curated a retailer URL for this card. Approx. $350.

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.


Specs

  • VRAM: 22 GB
  • Power draw: 250 W
  • Released: 2023
  • Backends: CUDA, Vulkan

Models that fit

Open-weight models small enough to run on NVIDIA RTX 2080 Ti 22GB (China-mod) with usable context.

  • Llama 3.1 8B Instruct · 8B · llama
  • Qwen 3 8B · 8B · qwen
  • Llama 3.2 3B Instruct · 3B · llama
  • Qwen 2.5 7B Instruct · 7B · qwen
  • DeepSeek R1 Distill Qwen 7B · 7B · deepseek
  • Hermes 3 Llama 3.1 8B · 8B · hermes
  • Mistral Nemo 12B Instruct · 12B · mistral
  • Gemma 4 E4B (Effective 4B) · 4B · gemma

Frequently asked

What models can NVIDIA RTX 2080 Ti 22GB (China-mod) run?

With 22 GB of VRAM, the NVIDIA RTX 2080 Ti 22GB (China-mod) runs models up to ~32B at 4-bit quantization, with room for context. See the model list above for tested combinations.

Does NVIDIA RTX 2080 Ti 22GB (China-mod) support CUDA?

Yes — NVIDIA RTX 2080 Ti 22GB (China-mod) is an NVIDIA card with full CUDA support, the most mature local-AI backend. llama.cpp, Ollama, vLLM, and ExLlamaV2 all run natively.

How much does NVIDIA RTX 2080 Ti 22GB (China-mod) cost?

Current street price for NVIDIA RTX 2080 Ti 22GB (China-mod) is around $350. Prices vary by region and supply.

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.