RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other first-class retailers). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts — there are cards we rate highly that we don't have affiliate relationships with, and cards that sell well that we refuse to recommend. Read more →

UNIT · AMD · GPU
8 GB VRAM · mid tier · Reviewed May 2026

AMD Radeon RX 580 8GB

AMD Polaris with 8 GB VRAM. Cheap on used market ($70-100) but Polaris was dropped from ROCm in 2022, so AMD's official AI runtimes won't work. Vulkan via llama.cpp is the only practical path; performance is bandwidth-limited at ~10-20 tok/s on 7B Q4. Mostly relevant for 'I already own one' operators, not buy-this-for-AI.

Released 2017 · ~$80 street · 256 GB/s memory bandwidth
RUNLOCALAI SCORE
See full leaderboard →
172 / 1000 · D-tier (estimated)
Throughput: 74 / 500
VRAM-fit: 80 / 200
Ecosystem: 80 / 200
Efficiency: 11 / 100

Extrapolated from 256 GB/s bandwidth — 25.6 tok/s estimated. No measured benchmarks yet.
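The bandwidth extrapolation above can be sketched as a back-of-envelope calculation. Decode-phase inference is memory-bound, so throughput is roughly bandwidth divided by weight size. The ~4.5 bits/weight for a Q4 GGUF and the 0.4 efficiency factor are illustrative assumptions, not the site's published methodology:

```python
def est_tok_per_s(bandwidth_gb_s, params_b, bits_per_weight, efficiency=0.4):
    """Decode streams every weight once per token, so
    tokens/s <= bandwidth / weight bytes, scaled by a real-world efficiency factor."""
    weight_gb = params_b * bits_per_weight / 8
    return bandwidth_gb_s / weight_gb * efficiency

# RX 580: 256 GB/s bandwidth, 7B model at ~4.5 bits/weight (typical Q4 GGUF)
print(round(est_tok_per_s(256, 7, 4.5), 1))  # -> 26.0, in line with the estimate above
```

Measured numbers land lower (the ~10-20 tok/s quoted on this page) because Vulkan on Polaris leaves efficiency on the table.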

WORKLOAD FIT
Try other hardware →

Plain-English: Edge-of-fit for 7B; expect compromises.

7B chat: ~ Tight
14B chat: ✗ Doesn't fit
32B chat: ✗ Doesn't fit
70B chat: ✗ Doesn't fit
Coding agent: ✗ Doesn't fit
Vision (≤8B VLM): △ Marginal
Long context (32K): ✗ Doesn't fit

✓ Comfortable — fits with headroom
~ Tight — works, no slack
△ Marginal — needs aggressive quant
✗ Doesn't fit usefully

Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Hover any chip for the rationale. Want measured numbers? Submit your own run with runlocalai-bench --submit.
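A verdict derived purely from catalog VRAM could look like the sketch below. The thresholds, KV-cache allowance, and runtime overhead figures are illustrative assumptions for an 8 GB card, not the site's actual scoring code:

```python
def fit_verdict(vram_gb, model_gb, ctx_gb=1.5, overhead_gb=1.2):
    """Classify model fit from VRAM alone (illustrative thresholds)."""
    need = model_gb + ctx_gb + overhead_gb  # weights + KV cache + runtime overhead
    if need <= 0.8 * vram_gb:
        return "comfortable"                # fits with headroom
    if need <= vram_gb:
        return "tight"                      # works, no slack
    if 0.78 * model_gb + ctx_gb + overhead_gb <= vram_gb:
        return "marginal"                   # only fits after requantizing to ~Q3
    return "doesn't fit"

print(fit_verdict(8, 3.9))  # 7B at Q4 (~3.9 GB)  -> tight
print(fit_verdict(8, 7.9))  # 14B at Q4 (~7.9 GB) -> doesn't fit
```

Under these assumptions an 8 GB card reproduces the table above: 7B lands in "Tight" and everything 14B and up falls off the edge.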

BLK · VERDICT

Our verdict

OP · Fredoline Eruo | Verified May 10, 2026
3.8/10

This card is for the operator who already has one in a drawer and wants to experiment with local inference on a tight budget. It is not a card to buy specifically for AI work.

On 7B Q4 models, expect ~10-20 tok/s via Vulkan in llama.cpp — usable for chat but not real-time. 13B Q4 models run at ~5-10 tok/s, and 8 GB of VRAM caps you at 13B Q4 or 7B Q8, with no room for larger models or for context beyond 4K.
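The VRAM ceiling in the paragraph above is simple arithmetic on quantized weight sizes. The bits-per-weight figures (~4.5 for a Q4_K-style GGUF, ~8.5 for Q8_0) are approximations, and real usage adds KV cache and runtime overhead on top:

```python
def weights_gb(params_b, bits_per_weight):
    """Approximate in-VRAM size of quantized weights alone (no KV cache)."""
    return params_b * bits_per_weight / 8

print(round(weights_gb(13, 4.5), 1))  # 13B Q4 -> 7.3 GB: fits 8 GB only with short context
print(round(weights_gb(7, 8.5), 1))   # 7B Q8  -> 7.4 GB: the same squeeze
```

Either option leaves well under 1 GB for context, which is why 4K is the practical limit here.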

The ROCm stack dropped Polaris in 2022, so Vulkan is the only path. No CUDA, no official AMD AI tools. Expect rough edges: no flash attention, no speculative decoding, and potential driver quirks on Linux.

Pass if you are buying a GPU for AI. For $80-100, a used RTX 3060 12GB delivers 2-3x the performance and full CUDA support. This card only makes sense if it is already in the rig and the budget is zero.

At $80 used, the price is tempting, but the performance ceiling and software limitations make it a poor investment for serious local AI work.

Why this rating

The RX 580 is usable for small models at modest speeds, but the lack of ROCm, CUDA, and modern features cripples its utility. The rating reflects its value only as a free or near-free entry point, not as a purposeful AI purchase.

BLK · OVERVIEW

Overview


Retailers we'd check: Amazon

Search-fallback links. Editorial hasn't yet curated retailer URLs for this card. Approx. $80.

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

BLK · SPECS

Specs

VRAM: 8 GB
Power draw: 185 W
Released: 2017
MSRP: $229
Backends: Vulkan

Models that fit

Open-weight models small enough to run on AMD Radeon RX 580 8GB with usable context.

  • Llama 3.2 3B Instruct · 3B · llama
  • Gemma 4 E4B (Effective 4B) · 4B · gemma
  • Qwen 3 4B · 4B · qwen
  • Phi-3.5 Mini Instruct · 3.8B · phi
  • Llama 3.2 1B Instruct · 1B · llama
  • Gemma 3 4B · 4B · gemma
  • Gemma 4 E2B (Effective 2B) · 2B · gemma
  • Phi-3.5 Vision · 4.2B · phi

Frequently asked

What models can AMD Radeon RX 580 8GB run?

With 8 GB of VRAM, the AMD Radeon RX 580 8GB runs 7B models in Q4 quantization, though the fit is tight. See the model list above for tested combinations.

Does AMD Radeon RX 580 8GB support CUDA?

AMD Radeon RX 580 8GB does not support CUDA. Use Vulkan-compatible tools (llama.cpp Vulkan backend) or check vendor-specific runtimes.

How much does AMD Radeon RX 580 8GB cost?

Current street price for AMD Radeon RX 580 8GB is around $80 (MSRP $229). Prices vary by region and supply.

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.

Compare alternatives

Hardware worth comparing

Same VRAM tier and the one step above and below — so you can frame the buying decision against real options.

Same VRAM tier
Cards in the same memory band
  • NVIDIA GeForce GTX 1070
    nvidia · 8 GB VRAM
    4.6/10
  • NVIDIA GeForce GTX 1070 Ti
    nvidia · 8 GB VRAM
    5.1/10
  • NVIDIA GeForce GTX 1080
    nvidia · 8 GB VRAM
    4.6/10
  • NVIDIA GeForce GTX 1660 Ti
    nvidia · 6 GB VRAM
    2.8/10
  • AMD Radeon RX 5500 XT 8GB
    amd · 8 GB VRAM
    3.5/10
  • Intel Arc B570
    intel · 10 GB VRAM
    5.8/10
Step up
More VRAM — bigger models, more context
  • NVIDIA GeForce GTX 1070
    nvidia · 8 GB VRAM
    4.6/10
  • AMD Radeon RX 6600 XT
    amd · 8 GB VRAM
    4.8/10
  • Intel Arc B570
    intel · 10 GB VRAM
    5.8/10
Step down
Less VRAM — cheaper, more constrained
  • NVIDIA GeForce GTX 1660 Ti
    nvidia · 6 GB VRAM
    2.8/10
  • NVIDIA GeForce GTX 1660
    nvidia · 6 GB VRAM
    2.8/10
  • AMD Radeon RX 5500 XT 8GB
    amd · 8 GB VRAM
    3.5/10