RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other first-class retailers). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts — there are cards we rate highly that we don't have affiliate relationships with, and cards that sell well that we refuse to recommend. Read more →

UNIT · AMD · GPU · 16 GB VRAM · Reviewed May 2026

AMD Radeon RX 6800

16 GB RDNA 2, the AMD answer to the 4070 12 GB question. Comfortably runs any 7B/13B model, and 32B Q4 with partial CPU offload. ROCm officially supported. ~70-90 tok/s on 7B Q4. The card to consider when VRAM matters more than raw inference speed.

Released 2020 · ~$380 street · 512 GB/s memory bandwidth
Affiliate disclosure: as an Amazon Associate and partner of other retailers, we earn from qualifying purchases. The verdict on this page is our editorial opinion; affiliate links never influence what we recommend.

RUNLOCALAI SCORE
See full leaderboard →
304 / 1000 · C-tier · Estimated
  • Throughput: 148 / 500
  • VRAM-fit: 140 / 200
  • Ecosystem: 130 / 200
  • Efficiency: 16 / 100

Extrapolated from 512 GB/s bandwidth — 51.2 tok/s estimated. No measured benchmarks yet.
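The 51.2 tok/s figure follows from a memory-bandwidth-bound decode model: generating one token streams roughly the full quantized weight set once, so tok/s ≈ bandwidth ÷ model size, times an efficiency factor. A sketch of that arithmetic; the 10 GB reference size and the 55-70% efficiency band are our assumptions, not RunLocalAI's published formula:

```python
def est_tokens_per_sec(bandwidth_gbps: float, model_gb: float,
                       efficiency: float = 1.0) -> float:
    """Upper-bound decode speed for a memory-bound LLM:
    each generated token streams the full weight set once."""
    return bandwidth_gbps / model_gb * efficiency

# RX 6800: 512 GB/s memory bandwidth (from the spec sheet on this page).
BW = 512.0

# The 51.2 tok/s estimate matches a 10 GB reference model at 100% efficiency:
print(est_tokens_per_sec(BW, 10.0))        # 51.2

# 7B at Q4 is roughly 4 GB of weights; real-world efficiency on consumer
# GPUs often lands around 55-70%, which brackets the 70-90 tok/s claim:
print(est_tokens_per_sec(BW, 4.0, 0.55))   # 70.4
print(est_tokens_per_sec(BW, 4.0, 0.70))   # 89.6
```

This is why bandwidth, not compute, dominates the throughput column for single-stream inference.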

WORKLOAD FIT
Try other hardware →

Plain-English: Comfortable at 14B and below — snappy enough for a coding agent.

  • 7B chat: ✓ Comfortable
  • 14B chat: ✓ Comfortable
  • 32B chat: ✗ Doesn't fit
  • 70B chat: ✗ Doesn't fit
  • Coding agent: ✓ Comfortable
  • Vision (≤8B VLM): ~ Tight
  • Long context (32K): ✓ Comfortable
Legend: ✓ Comfortable (fits with headroom) · ~ Tight (works, no slack) · △ Marginal (needs aggressive quant) · ✗ Doesn't fit usefully

Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
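The fit verdicts reduce to simple arithmetic: weight bytes (params × bits ÷ 8) plus KV cache and runtime overhead, checked against the 16 GB budget. A crude sketch; the KV-cache and overhead allowances are our assumptions, not the site's exact thresholds:

```python
def fits(params_b: float, quant_bits: float, vram_gb: float = 16.0,
         kv_gb: float = 1.5, overhead_gb: float = 1.0) -> str:
    """Rough VRAM-fit verdict: weights + KV cache + runtime overhead
    versus the card's memory budget."""
    need_gb = params_b * quant_bits / 8 + kv_gb + overhead_gb
    headroom = vram_gb - need_gb
    if headroom > 2:
        return "comfortable"
    if headroom > 0:
        return "tight"
    return "doesn't fit"

for size in (7, 14, 32, 70):
    print(f"{size}B Q4: {fits(size, 4)}")
```

Running this reproduces the table above: 7B and 14B at Q4 are comfortable, while 32B and 70B exceed 16 GB without CPU offload.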

BLK · VERDICT

Our verdict

OP · Fredoline Eruo · Verified May 10, 2026
7.3/10

This card is for the operator who needs 16 GB of VRAM for local inference on a budget and is willing to trade raw speed for capacity. It comfortably runs 7B and 13B models at Q4 with headroom, and can run 32B Q4 models with partial CPU offload, something a 12 GB card struggles to do. Expect ~70-90 tok/s on 7B Q4, ~35-45 tok/s on 13B Q4, and ~15-20 tok/s on 32B Q4 with offload, based on its 512 GB/s bandwidth. The 16 GB of VRAM also allows larger quantizations like 7B Q8 or 13B Q6 without offloading.

What breaks: ROCm support is official but less polished than CUDA; some operators report occasional driver issues or missing features such as Flash Attention. Training workloads are slower than on equivalent Nvidia cards.

When to pass: if peak inference speed on smaller models is the priority, a 4070 or 3080 12 GB will be faster. Also pass if you need absolute software compatibility or plan to run 70B models; 16 GB is insufficient for those.

At ~$380 used, this is the best VRAM-per-dollar card for local AI, beating the 4060 Ti 16 GB on bandwidth and price.
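Running a 32B Q4 model on a 16 GB card means splitting transformer layers between GPU and CPU (llama.cpp exposes this as `--n-gpu-layers` / `-ngl`). One way to pick a starting split; the 19 GB file size, 64-layer count, and 2 GB reserve are our assumed figures for a typical 32B Q4 GGUF, not measured values:

```python
import math

def gpu_layers(model_gb: float, n_layers: int, vram_gb: float = 16.0,
               reserve_gb: float = 2.0) -> int:
    """How many transformer layers fit on the GPU, reserving VRAM
    for KV cache, compute buffers, and the desktop compositor."""
    per_layer_gb = model_gb / n_layers
    return min(n_layers, math.floor((vram_gb - reserve_gb) / per_layer_gb))

# Assumed: a 32B model at Q4 is roughly 19 GB spread across ~64 layers.
print(gpu_layers(19.0, 64))   # 47 -> start around -ngl 47 and tune from there
```

Start near the computed value, then nudge `-ngl` up until you hit an out-of-memory error and back off one or two layers.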

›Why this rating

The RX 6800 offers excellent VRAM capacity and bandwidth for its price, making it a top choice for budget-conscious operators running 13B-32B models. The rating is slightly lowered due to ROCm's less mature ecosystem compared to CUDA, and slower performance on training tasks.

BLK · OVERVIEW

Overview

16 GB RDNA 2, the AMD answer to the 4070 12 GB question. Comfortably runs any 7B/13B model, and 32B Q4 with partial CPU offload. ROCm officially supported. ~70-90 tok/s on 7B Q4. The card to consider when VRAM matters more than raw inference speed.

Retailers we'd check: Amazon

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

BLK · SPECS

Specs

VRAM: 16 GB
Power draw: 250 W
Released: 2020
MSRP: $579
Backends: ROCm, Vulkan

Models that fit

Open-weight models small enough to run on AMD Radeon RX 6800 with usable context.

  • Llama 3.1 8B Instruct · 8B · llama
  • Qwen 3 8B · 8B · qwen
  • Llama 3.2 3B Instruct · 3B · llama
  • Qwen 2.5 7B Instruct · 7B · qwen
  • DeepSeek R1 Distill Qwen 7B · 7B · deepseek
  • Hermes 3 Llama 3.1 8B · 8B · hermes
  • Gemma 4 E4B (Effective 4B) · 4B · gemma
  • Qwen 3 4B · 4B · qwen

Frequently asked

What models can AMD Radeon RX 6800 run?

With 16 GB VRAM, the AMD Radeon RX 6800 runs models up to 14B in 4-bit, or 7B at higher quantizations. See the model list above for tested combinations.

Does AMD Radeon RX 6800 support CUDA?

No — AMD Radeon RX 6800 is an AMD card. Use ROCm (Linux) or the Vulkan backend in llama.cpp instead. CUDA-only tools won't work.

How much does AMD Radeon RX 6800 cost?

Current street price for AMD Radeon RX 6800 is around $380 (MSRP $579). Prices vary by region and supply.
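The verdict's "best VRAM-per-dollar" claim is easy to sanity-check with street prices. Only the RX 6800 figures come from this page; the RTX 4060 Ti 16 GB street price and bandwidth below are our assumptions, and spot prices move:

```python
cards = {
    # name: (vram_gb, street_usd, bandwidth_gbps)
    "RX 6800":           (16, 380, 512),  # figures from this page
    "RTX 4060 Ti 16GB":  (16, 450, 288),  # assumed street price and spec
}

for name, (vram, usd, bw) in cards.items():
    gb_per_kusd = vram / usd * 1000
    print(f"{name}: {gb_per_kusd:.1f} GB per $1000 · {bw} GB/s")
```

At these prices the RX 6800 wins on both axes: more VRAM per dollar and nearly double the memory bandwidth.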

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.

Compare alternatives

Hardware worth comparing

Same VRAM tier and the one step above and below — so you can frame the buying decision against real options.

Same VRAM tier
Cards in the same memory band
  • NVIDIA GeForce RTX 2080 Super · nvidia · 8 GB VRAM · 5.1/10
  • NVIDIA GeForce RTX 3070 Ti · nvidia · 8 GB VRAM · 5.0/10
  • NVIDIA GeForce RTX 2080 Ti · nvidia · 11 GB VRAM · 6.6/10
  • NVIDIA GeForce RTX 2070 Super · nvidia · 8 GB VRAM · 4.8/10
  • AMD Radeon RX 6800 XT · amd · 16 GB VRAM · 7.3/10
  • Intel Arc A770 16GB · intel · 16 GB VRAM · 6.5/10
Step up
More VRAM — bigger models, more context
  • NVIDIA GeForce RTX 2080 Ti · nvidia · 11 GB VRAM · 6.6/10
  • AMD Radeon RX 6800 XT · amd · 16 GB VRAM · 7.3/10
  • AMD Radeon RX 6900 XT · amd · 16 GB VRAM · 7.3/10
Step down
Less VRAM — cheaper, more constrained
  • NVIDIA GeForce RTX 2080 Super · nvidia · 8 GB VRAM · 5.1/10
  • AMD Radeon RX 6750 XT · amd · 12 GB VRAM · 7.1/10
  • Intel Arc A770 16GB · intel · 16 GB VRAM · 6.5/10