Slow tokens/sec on capable GPU (silent CPU fallback)

(no error — output is correct but tok/s is 5-10× slower than expected)
By Fredoline Eruo · Last verified May 8, 2026

Cause

The model loaded, but it is running on the CPU instead of the GPU. Common causes: in Python, pip install torch without specifying a CUDA index pulls in the CPU-only wheel; in Ollama, OLLAMA_NO_GPU=1 was set; in llama.cpp, -ngl 0 was passed (or omitted on a build that doesn't auto-offload).

Symptom: nvidia-smi shows 0% GPU utilization while inference runs. Per-token latency is dominated by CPU memory bandwidth, which is roughly 10× lower than GPU VRAM bandwidth.
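
As a rough sanity check you can estimate the ceiling yourself: for a dense model, every weight is read roughly once per generated token, so best-case tok/s ≈ memory bandwidth ÷ model size in memory. A minimal back-of-envelope sketch (the bandwidth and size figures below are illustrative assumptions, not measurements):

# Illustrative numbers only; substitute your own hardware's figures
model_gb = 4.0     # e.g. a 7B model at Q4 quantization is roughly 4 GB
cpu_bw = 80.0      # dual-channel DDR5 system RAM, GB/s (assumed)
gpu_bw = 1000.0    # high-end GPU VRAM, GB/s (assumed)
print(cpu_bw / model_gb, "tok/s ceiling on CPU")   # ~20
print(gpu_bw / model_gb, "tok/s ceiling on GPU")   # ~250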

Solution

1. Confirm the GPU is being used:

# Watch GPU usage while a prompt runs
watch -n 0.5 nvidia-smi

If utilization stays at 0%, you're on CPU.
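
If you'd rather log numbers than watch the dashboard, nvidia-smi's query mode prints just the fields you care about; a model that is actually resident on the GPU also shows up as a jump in memory used:

# Utilization and VRAM use, one sample per second
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1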

2. PyTorch — check the install:

import torch
print(torch.cuda.is_available(), torch.version.cuda)

If False / None, reinstall with the CUDA index:

pip uninstall torch torchvision -y
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124
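
Even with a CUDA wheel installed, PyTorch still runs on the CPU unless the model and inputs are explicitly moved to the GPU. Higher-level libraries usually handle this for you; the toy layer below is just a stand-in to show the pattern:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4096, 4096).to(device)   # weights now live in VRAM
x = torch.randn(1, 4096, device=device)    # inputs must be on the same device
print(model(x).device)                     # expect cuda:0, not cpu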

3. llama.cpp — set GPU layers:

# Offload all layers to the GPU (99 is simply larger than any model's layer count)
./llama-cli -m model.gguf -ngl 99 -p "test"

Watch the loader output — it should say offloaded N/N layers to GPU.
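
If you drive llama.cpp through the llama-cpp-python bindings instead of the CLI, the equivalent knob is n_gpu_layers. The sketch below assumes a wheel that was itself built with CUDA support; a CPU-only wheel ignores the setting:

from llama_cpp import Llama

# n_gpu_layers=-1 offloads every layer, like -ngl 99 on the CLI
llm = Llama(model_path="model.gguf", n_gpu_layers=-1, verbose=True)
print(llm("test", max_tokens=8))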

4. Ollama — check env:

env | grep OLLAMA
# Unset any blockers
unset OLLAMA_NO_GPU
ollama serve
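
With the server restarted and a model loaded, ollama ps reports where that model is actually running:

ollama ps
# The PROCESSOR column should show GPU (or a GPU/CPU split), not "100% CPU"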

5. Confirm you built with the right backend. A llama.cpp built without GGML_CUDA=1 silently runs on CPU even with -ngl set. Rebuild with the correct backend.
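
With current llama.cpp sources, a CUDA rebuild looks roughly like this (flag names have changed between releases, so check the repo's build docs if it fails):

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j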

Related errors

  • Ollama: bind: address already in use (port 11434)
  • Ollama: Error: model 'X' not found
  • Ollama truncates input — default context length is only 2048
  • Ollama: connection refused on localhost:11434
  • Token generation slows as conversation gets longer

Did this fix it?

If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.