AMD Radeon RX 6800
Affiliate disclosure: as an Amazon Associate and partner of other retailers, we earn from qualifying purchases. The verdict on this page is our editorial opinion; affiliate links never influence what we recommend.
Estimated 51.2 tok/s, extrapolated from 512 GB/s memory bandwidth. No measured benchmarks yet.
Plain-English: Comfortable at 14B and below — snappy enough for a coding agent.
Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Hover any chip for the rationale. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator who needs 16 GB of VRAM for local inference on a budget and is willing to trade raw speed for capacity. It comfortably runs 7B and 13B models at Q4 with headroom, and can run 32B Q4 models with most layers offloaded to the GPU, which a 12 GB card cannot match. Expect ~70-90 tok/s on 7B Q4, ~35-45 tok/s on 13B Q4, and ~15-20 tok/s on 32B Q4, based on its 512 GB/s memory bandwidth. The 16 GB of VRAM also leaves room for larger quantizations such as 7B Q8 or 13B Q6 without offloading.

What breaks: ROCm support is official but less polished than CUDA; some operators report occasional driver issues or missing features such as Flash Attention. Training workloads are slower than on equivalent Nvidia cards.

When to pass: if peak inference speed on smaller models is the priority, a 4070 or 3080 12 GB will be faster. Also pass if you need maximum software compatibility or plan to run 70B models; 16 GB is insufficient for those. At ~$380 used, this is the best VRAM-per-dollar card for local AI, beating the 4060 Ti 16 GB on both bandwidth and price.
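The throughput figures above follow directly from memory bandwidth: single-stream decoding is bandwidth-bound, since every generated token streams the full set of weights from VRAM once, so tok/s ≈ bandwidth ÷ model size × efficiency. A minimal sketch of that back-of-envelope math; the ~0.6 efficiency factor and the ~0.56 bytes/param figure for Q4 quants are illustrative assumptions, not measured values:

```python
def estimate_tok_s(bandwidth_gb_s: float, params_b: float,
                   bytes_per_param: float, efficiency: float = 0.6) -> float:
    """Bandwidth-bound decode estimate: each token requires reading
    all quantized weights from VRAM, so throughput is capped at
    bandwidth / model size, scaled by a real-world efficiency factor."""
    model_gb = params_b * bytes_per_param  # weight size in GB
    return bandwidth_gb_s / model_gb * efficiency

# RX 6800: 512 GB/s. Q4 quants average ~0.56 bytes/param (assumed).
print(round(estimate_tok_s(512, 7, 0.56)))   # 7B Q4  -> ~78 tok/s
print(round(estimate_tok_s(512, 13, 0.56)))  # 13B Q4 -> ~42 tok/s
```

Both estimates land inside the quoted ~70-90 and ~35-45 ranges, which is why the verdict chips on this page can be extrapolated from catalog bandwidth alone.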
Why this rating
The RX 6800 offers excellent VRAM capacity and bandwidth for its price, making it a top choice for budget-conscious operators running 13B-32B models. The rating is slightly lowered due to ROCm's less mature ecosystem compared to CUDA, and slower performance on training tasks.
Overview
16 GB RDNA 2 — the AMD answer to the 4070 12 GB question. Comfortable for any 7B/13B model + 32B Q4 with offload. ROCm officially supported. ~70-90 tok/s on 7B Q4. The card to consider when VRAM matters more than raw inference speed.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 16 GB |
| Power draw | 250 W |
| Released | 2020 |
| MSRP | $579 |
| Backends | ROCm, Vulkan |
Models that fit
Open-weight models small enough to run on AMD Radeon RX 6800 with usable context.
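"Fits" here means the quantized weights plus runtime overhead (KV cache, activations, buffers) stay under 16 GB. A minimal sketch of that check; the 1.2 overhead multiplier and ~0.56 bytes/param for Q4 are assumptions for illustration, not figures from this page's catalog:

```python
def fits_in_vram(params_b: float, bytes_per_param: float,
                 vram_gb: float = 16.0, overhead: float = 1.2) -> bool:
    """True if quantized weights, inflated by an assumed overhead
    factor for KV cache and buffers, fit entirely in VRAM."""
    return params_b * bytes_per_param * overhead <= vram_gb

print(fits_in_vram(13, 0.56))  # 13B Q4: ~8.7 GB  -> True
print(fits_in_vram(32, 0.56))  # 32B Q4: ~21.5 GB -> False (partial offload)
```

This is why 13B Q4 runs with headroom on this card while 32B Q4 needs some layers offloaded to system RAM.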
Frequently asked
What models can AMD Radeon RX 6800 run?
Any 7B or 13B model at Q4 with headroom, larger quantizations such as 7B Q8 or 13B Q6 entirely in VRAM, and 32B models at Q4 with partial offload. 70B models do not fit in 16 GB.
Does AMD Radeon RX 6800 support CUDA?
No. CUDA is Nvidia-only. The RX 6800 runs local inference through ROCm, which is officially supported, or Vulkan.
How much does AMD Radeon RX 6800 cost?
MSRP was $579 at its 2020 launch; used cards go for roughly $380, which makes it the best VRAM-per-dollar option covered here.
Where next?
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.