RUNLOCALAI · v38

Independently operated catalog for local-AI hardware and software. Hand-written verdicts. Source-cited claims. Reproducible commands when we have them.


What can AMD Instinct MI300A (APU) run?

Build: AMD Instinct MI300A (APU) + — + 32 GB RAM (Windows)

Memory: 128 GB VRAM + 32 GB system RAM
Runner: llama.cpp (CPU only)

Runs comfortably · 160 models (top 12 shown)

Full-VRAM resident, with room for context. No compromises.
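
Where a model lands among this page's three sections follows from two numbers: its estimated memory use and this build's capacity. Below is a minimal sketch of the bucketing, with loud caveats: the engine's real cutoffs aren't published, OFFLOAD_FRACTION comes from the wording in the Won't-run section further down, and COMFORT_MARGIN_GB is our guess at where "tight" begins.

    # Assumed thresholds; the compatibility engine's real cutoffs aren't published.
    VRAM_GB = 128.0            # MI300A unified HBM, shown as VRAM on this page
    SYSTEM_RAM_GB = 32.0
    OFFLOAD_FRACTION = 0.6     # "60% of system RAM", per the Won't-run notes below
    COMFORT_MARGIN_GB = 4.0    # assumed: the page flags 1.4-3.3 GB headroom as "tight"

    def bucket(est_vram_gb: float) -> str:
        """Classify a model into this page's three sections from its
        estimated memory use (the card's VRAM line)."""
        headroom = VRAM_GB - est_vram_gb
        if headroom >= COMFORT_MARGIN_GB:
            return "runs comfortably"        # fully VRAM-resident, room for context
        if est_vram_gb <= VRAM_GB + OFFLOAD_FRACTION * SYSTEM_RAM_GB:
            return "runs with tradeoffs"     # tight fit or partial CPU offload
        return "won't run"                   # exceeds VRAM + offloadable RAM

    print(bucket(9.8))     # Gemma 3 1B below -> runs comfortably
    print(bucket(126.6))   # Command R+ 104B -> runs with tradeoffs (1.4 GB headroom)
    print(bucket(200.0))   # DeepSeek-R1-class -> won't run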

#1 Gemma 3 1B · 1B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 9.8 GB · Headroom: 118.2 GB
ollama run gemma3:1b
1,118 tok/s (E)
Weights 0.60 GB · KV cache 0.50 GB · Activations 8.22 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

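Each card's VRAM figure is simply the sum of the four component lines beneath it, and headroom is this build's capacity minus that sum. A quick check against card #1:

    # Numbers from card #1 above (Gemma 3 1B, Q4_K_M, 8,192 context)
    weights_gb, kv_gb, activations_gb, runtime_gb = 0.60, 0.50, 8.22, 0.50
    vram_capacity_gb = 128.0

    vram_used_gb = weights_gb + kv_gb + activations_gb + runtime_gb
    headroom_gb = vram_capacity_gb - vram_used_gb

    print(f"VRAM: {vram_used_gb:.1f} GB")     # VRAM: 9.8 GB  (matches the card)
    print(f"Headroom: {headroom_gb:.1f} GB")  # Headroom: 118.2 GB
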
#2 Llama 3.2 1B Instruct · 1B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 10.3 GB · Headroom: 117.7 GB
ollama run llama3.2:1b
635 tok/s (E)
Weights 1.06 GB · KV cache 0.50 GB · Activations 8.25 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#3 Gemma 4 E2B (Effective 2B) · 2B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 11.9 GB · Headroom: 116.1 GB
ollama run gemma4:e2b
318 tok/s (E)
Weights 2.13 GB · KV cache 1.00 GB · Activations 8.30 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#4 Llama 3.2 3B Instruct · 3B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 13.5 GB · Headroom: 114.5 GB
ollama run llama3.2:3b
212 tok/s (E)
Weights 3.19 GB · KV cache 1.50 GB · Activations 8.35 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#5 Phi-3.5 Vision · 4.2B · phi · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.5 GB · Headroom: 114.5 GB
266 tok/s (E)
Weights 2.54 GB · KV cache 2.10 GB · Activations 8.32 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#6 Phi-3.5 Mini Instruct · 3.8B · phi · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 14.8 GB · Headroom: 113.2 GB
ollama run phi3.5:3.8b
167 tok/s (E)
Weights 4.04 GB · KV cache 1.90 GB · Activations 8.39 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#7 Gemma 4 E4B (Effective 4B) · 4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 15.2 GB · Headroom: 112.8 GB
ollama run gemma4:e4b
159 tok/s (E)
Weights 4.25 GB · KV cache 2.00 GB · Activations 8.40 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#8 Qwen 3 4B · 4B · qwen · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 15.2 GB · Headroom: 112.8 GB
ollama run qwen3:4b
159 tok/s (E)
Weights 4.25 GB · KV cache 2.00 GB · Activations 8.40 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#9 Gemma 3 4B · 4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 15.2 GB · Headroom: 112.8 GB
ollama run gemma3:4b
159 tok/s (E)
Weights 4.25 GB · KV cache 2.00 GB · Activations 8.40 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#10 Llama 3.1 Nemotron Nano 8B · 8B · llama · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.8 GB · Headroom: 110.2 GB
140 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#11 Mistral 7B Instruct v0.3 · 7B · mistral · Commercial OK
Quant: Q5_K_M · Context: 8,192 · VRAM: 17.2 GB · Headroom: 110.8 GB
ollama run mistral:7b
140 tok/s (E)
Weights 4.81 GB · KV cache 3.50 GB · Activations 8.43 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

#12 CodeGemma 7B · 7B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 16.6 GB · Headroom: 111.4 GB
ollama run codegemma:7b
160 tok/s (E)
Weights 4.23 GB · KV cache 3.50 GB · Activations 8.40 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

Runs with tradeoffs · 2 models

Tight VRAM, partial CPU offload, or context-limited.

Command R+ 104B · 104B · command-r
Quant: Q4_K_M · Context: 8,192 · VRAM: 126.6 GB · Headroom: 1.4 GB
  • Tight VRAM fit: only 1.4 GB of headroom left for context growth
ollama run command-r-plus:104b
11 tok/s (E)
Weights 62.79 GB · KV cache 52.00 GB · Activations 11.33 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →

Command R+ (Aug 2024) · 104B · command-r
Quant: AWQ-INT4 · Context: 2,048 · VRAM: 124.7 GB · Headroom: 3.3 GB
  • Tight VRAM fit: only 3.3 GB of headroom left for context growth
6 tok/s (E)
Weights 104.00 GB · KV cache 13.00 GB · Activations 7.25 GB · Runtime 0.50 GB
Model details → · Run-on benchmark page →
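
The two Command R+ rows show the context/KV-cache trade directly: 52 GB of KV cache at 8,192 tokens vs 13 GB at 2,048. KV cache grows linearly with context length, so you can rescale any card's KV line to a different context without knowing the architecture constants. A hedged sketch:

    def kv_cache_at(ctx: int, kv_gb_ref: float, ctx_ref: int) -> float:
        """Rescale a card's KV-cache line to another context length.
        KV cache is linear in context (2 * layers * kv_heads * head_dim
        * bytes per element, per token), so anchoring on one known point
        avoids needing the architecture constants."""
        return kv_gb_ref * ctx / ctx_ref

    # Command R+ 104B, anchored on the Q4_K_M card (52 GB at 8,192):
    print(kv_cache_at(2_048, kv_gb_ref=52.0, ctx_ref=8_192))   # 13.0, matches the AWQ card
    print(kv_cache_at(32_768, kv_gb_ref=52.0, ctx_ref=8_192))  # 208.0 -> far over budget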

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM (~$80–150)

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM; the budget math is sketched after this card.

Unlocks: 4 new tradeoff models
  • GLM-5
  • Command R+ 104B
  • DBRX Instruct
  • Command R+ (Aug 2024)
Shop this upgrade ↗
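
The working-set rule behind this scenario is the same one quoted in the Won't-run section below: all VRAM plus 60% of system RAM. A minimal sketch of that budget (treating the 60% rule as a hard cutoff is our simplification):

    VRAM_GB = 128.0
    OFFLOAD_FRACTION = 0.6   # "60% of system RAM", per the engine's own messages

    def offload_budget_gb(system_ram_gb: float) -> float:
        """Largest memory footprint the engine will attempt: all VRAM
        plus 60% of system RAM for CPU-offloaded layers."""
        return VRAM_GB + OFFLOAD_FRACTION * system_ram_gb

    print(offload_budget_gb(32.0))  # 147.2 GB with the current build
    print(offload_budget_gb(64.0))  # 166.4 GB after +32 GB: room for the 4 models above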

Upgrade to AMD Instinct MI300X (see current pricing)

192 GB VRAM (vs your 128 GB) plus a jump to ~5,325 GB/s of memory bandwidth.

Unlocks: 8 new comfortable models, including:
  • Command R+ 104B
  • GLM-5
  • Qwen 3 235B-A22B
  • DeepSeek Coder V2 236B
Shop this upgrade ↗

Add a second AMD Instinct MI300A (APU) (see current pricing)

Tensor parallelism splits the model across both cards, effectively doubling usable VRAM. Bandwidth doesn't double; expect ~1.5× the single-card speed in practice (estimator sketched after this card).

Unlocks: 11 new comfortable models, including:
  • DeepSeek V4 Flash (284B MoE)
  • Command R+ 104B
  • GLM-5
  • Qwen 3 235B-A22B
Shop this upgrade ↗
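
A rough estimator for the second-card scenario, using only the scaling stated above (2× memory, ~1.5× decode speed; the 1.5× is this page's empirical figure, not a law):

    def dual_card_estimate(vram_gb: float, tok_s: float) -> tuple[float, float]:
        """Two identical cards under tensor parallelism: memory roughly
        doubles, while decode speed scales by ~1.5x (the page's observed
        factor; interconnect and sync overhead absorb the rest)."""
        return 2.0 * vram_gb, 1.5 * tok_s

    vram2, tok_s2 = dual_card_estimate(128.0, 11.0)  # Command R+ 104B card above
    print(vram2)   # 256.0 GB -> the tight fits above become comfortable
    print(tok_s2)  # 16.5 tok/s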

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run · top 5 popular models

Need more memory than you have. Shown for orientation.

DeepSeek V4 Pro (1.6T MoE) · 1600B · deepseek · Commercial OK
Qwen 3.5 235B-A17B (MoE) · 397B · qwen · Commercial OK
Qwen 3 235B-A22B · 235B · qwen · Commercial OK
DeepSeek R1 (671B reasoning) · 671B · deepseek · Commercial OK
DeepSeek V4 Flash (284B MoE) · 284B · deepseek · Commercial OK

Even with CPU offload, each of these needs more memory than your VRAM (128 GB) + 60% of system RAM (19 GB) combined.

How to read these numbers

M · Measured: we ran this exact combo on owner hardware.
~ · Extrapolated: predicted from a measured benchmark on similar-bandwidth hardware.
E · Estimated: pure formula based on VRAM bandwidth and model architecture.
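
For E rows, the governing formula is the usual memory-bandwidth bound: decoding is dominated by streaming the quantized weights through the GPU once per generated token, so speed is roughly usable bandwidth divided by weight bytes. A minimal sketch; the efficiency factor, function name, and example numbers below are ours for illustration, not the engine's calibrated values.

    def estimated_tok_s(weights_gb: float, bandwidth_gbs: float,
                        efficiency: float = 0.5) -> float:
        """Memory-bound decode estimate: each token streams the quantized
        weights once, so tok/s ~= usable bandwidth / weight bytes.
        `efficiency` (fraction of peak bandwidth achieved) is an assumed
        placeholder; MoE models read only active-expert weights."""
        return bandwidth_gbs * efficiency / weights_gb

    # Hypothetical: a 4.81 GB Q5_K_M model on a card with 2,000 GB/s peak
    print(round(estimated_tok_s(4.81, bandwidth_gbs=2_000.0)))  # ~208 tok/s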

Full methodology →

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.