RUNLOCALAI · v38
Reviewed May 2026

AMD Radeon 880M (Strix Point iGPU)

AMD's 880M iGPU (Ryzen AI 300 series Strix Point). RDNA 3.5 with LPDDR5x-7500 unified memory — bandwidth jump from 780M (89 → 102 GB/s). ~8-15 tok/s on 7B Q4. Pairs with the dedicated XDNA 2 NPU for hybrid CPU+NPU+iGPU inference experiments.

Released 2024 · 102 GB/s memory bandwidth
RUNLOCALAI SCORE
See full leaderboard →
132 / 1000
DD-tier (Estimated)
Throughput: 30 / 500
VRAM-fit: 0 / 200
Ecosystem: 130 / 200
Efficiency: 29 / 100

Extrapolated from 102 GB/s bandwidth — 10.2 tok/s estimated. No measured benchmarks yet.
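The site's exact estimator isn't published, but a common back-of-envelope assumes decode speed is memory-bandwidth-bound: each generated token streams the whole model through memory once, so bandwidth divided by model size bounds tok/s. A minimal sketch — the 0.5 efficiency factor and the 4.3 GB model size are illustrative assumptions, not RunLocalAI's numbers:

```python
def decode_toks_estimate(bandwidth_gbs: float, model_gb: float,
                         efficiency: float = 0.5) -> float:
    """Rough decode-speed estimate for a bandwidth-bound GPU:
    theoretical ceiling (bandwidth / model size) scaled by a real-world
    efficiency factor (assumed, typically ~0.4-0.6 on iGPUs)."""
    return bandwidth_gbs / model_gb * efficiency

# 880M: 102 GB/s; a 7B model at Q4_K_M is roughly 4.3 GB on disk
print(round(decode_toks_estimate(102, 4.3), 1))  # -> 11.9
```

That lands inside the 8-15 tok/s range quoted above; the efficiency factor is the big unknown on shared-memory iGPUs.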

WORKLOAD FIT
Try other hardware →

Plain-English: the catalog records 0 GB of dedicated VRAM for this iGPU, so every automated verdict below reads "doesn't fit", vision models included. In practice the 880M borrows system RAM; see the hand-written verdict for what actually runs.

7B chat: ✗ Doesn't fit
14B chat: ✗ Doesn't fit
32B chat: ✗ Doesn't fit
70B chat: ✗ Doesn't fit
Coding agent: ✗ Doesn't fit
Vision (≤8B VLM): ✗ Doesn't fit
Long context (32K): ✗ Doesn't fit
✓ Comfortable — fits with headroom
~ Tight — works, no slack
△ Marginal — needs aggressive quant
✗ Doesn't fit usefully

Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Hover any chip for the rationale. Want measured numbers? Submit your own run with runlocalai-bench --submit.
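A hypothetical sketch of how catalog-driven fit verdicts like these can be derived — this is not RunLocalAI's actual code; the thresholds, the 1.5 GB KV-cache overhead, and the ~4.5 bits/weight figure for Q4_K_M are all assumptions:

```python
def quant_size_gb(params_b: float, bits_per_weight: float = 4.5) -> float:
    """Approximate quantized model size: params (billions) * bits / 8.
    Q4_K_M averages roughly 4.5 bits/weight across tensor types (assumed)."""
    return params_b * bits_per_weight / 8

def fit_verdict(model_gb: float, memory_gb: float,
                kv_overhead_gb: float = 1.5) -> str:
    """Bucket the headroom left after loading model + KV cache."""
    free = memory_gb - model_gb - kv_overhead_gb
    if free > 2:
        return "comfortable"
    if free > 0:
        return "tight"
    if free > -1:
        return "marginal"
    return "no"

# With the catalog's 0 GB VRAM every row collapses to "no" — shared
# system RAM simply isn't counted:
print(fit_verdict(quant_size_gb(7), 0))   # -> no
# Counting, say, 16 GB of shared memory instead:
print(fit_verdict(quant_size_gb(7), 16))  # -> comfortable
```

The gap between those two calls is exactly why the automated table disagrees with the hand-written verdict above.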

BLK · VERDICT

Our verdict

OP · Fredoline Eruo | VERIFIED MAY 10, 2026
2.4/10

The AMD Radeon 880M is for the operator who needs a unified-memory laptop that can run small local models without a discrete GPU, and who values power efficiency over throughput. This iGPU handles 7B Q4 models at 8-15 tok/s, enough for interactive chat with acceptable latency, and manages 3B models at roughly 20-30 tok/s. The 102 GB/s bandwidth and shared system memory mean a 13B Q4 model (~9 GB) will run at 5-8 tok/s, and anything above 13B is impractical given the memory budget. ROCm support is present but experimental on iGPUs, so expect driver quirks. Pass on this chip if you need to run 13B+ models at usable speeds, or if you require stable, well-documented GPU compute — a used RTX 3060 12GB will outperform it significantly. As an iGPU, the 880M has no standalone price; its value is tied to the laptop's total cost, making it a decent choice for portable inference but not a dedicated AI workhorse.

›Why this rating

The 880M offers a unique low-power unified memory solution for small models, but its limited bandwidth and VRAM cap its usefulness to entry-level workloads. It earns a middling score because it fills a niche for ultraportable inference but is easily outperformed by cheap discrete GPUs.


BLK · SPECS

Specs

VRAM: 0 GB dedicated (shares system RAM)
Power draw: 28 W
Released: 2024
Backends: ROCm, Vulkan

Frequently asked

Does AMD Radeon 880M (Strix Point iGPU) support CUDA?

No — the AMD Radeon 880M (Strix Point iGPU) is an AMD part, not NVIDIA. Use ROCm (Linux) or the Vulkan backend in llama.cpp instead; CUDA-only tools won't work.
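A minimal sketch of the Vulkan route with llama.cpp (the model filename is a placeholder; a Vulkan-capable driver and the Vulkan SDK are assumed to be installed):

```shell
# Build llama.cpp with the Vulkan backend
git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Run with layers offloaded to the iGPU (-ngl 99 = offload as many as fit)
./build/bin/llama-cli -m ./model-q4_k_m.gguf -ngl 99 -p "Hello"
```

On shared-memory iGPUs the Vulkan backend is often the path of least resistance, since ROCm's iGPU support is still experimental.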

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.

Compare alternatives

Hardware worth comparing

Same VRAM tier and the one step above and below — so you can frame the buying decision against real options.

Same VRAM tier
Cards in the same memory band
  • NVIDIA GeForce GTX 1050 Ti
    nvidia · 4 GB VRAM
    1.3/10
  • NVIDIA GeForce GTX 1650
    nvidia · 4 GB VRAM
    1.8/10
  • AMD Radeon 780M (Phoenix iGPU)
    amd · 0 GB VRAM
    2.1/10
  • NVIDIA GeForce RTX 3050
    nvidia · 8 GB VRAM
    5.3/10
  • NVIDIA GeForce GTX 1060 6GB
    nvidia · 6 GB VRAM
    2.6/10
  • NVIDIA GeForce GTX 1060 3GB
    nvidia · 3 GB VRAM
    1.1/10
Step up
More VRAM — bigger models, more context
  • NVIDIA GeForce GTX 1660
    nvidia · 6 GB VRAM
    2.8/10
  • NVIDIA GeForce GTX 1070 Ti
    nvidia · 8 GB VRAM
    5.1/10
  • NVIDIA GeForce GTX 1070
    nvidia · 8 GB VRAM
    4.6/10
Step down
Less VRAM — cheaper, more constrained
No verdicted hardware in the next tier down yet.