Slow tokens/sec on capable GPU (silent CPU fallback)
Cause
The model loaded but is running on the CPU instead of the GPU. Common triggers:
- PyTorch: pip install torch without the CUDA index URL picked up a CPU-only wheel (the default on some platforms, notably Windows).
- Ollama: OLLAMA_NO_GPU=1 was set in the server's environment.
- llama.cpp: -ngl 0 was passed, or -ngl was omitted on a build that doesn't auto-offload.
Symptom: nvidia-smi shows 0% GPU utilization while inference runs, and per-token latency is dominated by system RAM bandwidth, which is roughly 10× lower than GPU VRAM bandwidth.
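For batch-1 token generation, memory bandwidth is usually the binding constraint, because every weight must be streamed from memory once per token. A rough back-of-envelope in Python makes the gap concrete (all numbers are illustrative assumptions, not measurements):

# Tokens/sec ceiling implied by memory bandwidth alone
model_bytes = 4e9   # ~4 GB: a 7B model at 4-bit quantization (assumed)
cpu_bw = 50e9       # ~50 GB/s: dual-channel DDR5 system RAM (assumed)
gpu_bw = 500e9      # ~500 GB/s: mid-range GPU VRAM (assumed)
# Each generated token reads all weights once:
print(f"CPU ceiling: {cpu_bw / model_bytes:.0f} tok/s")  # ~12 tok/s
print(f"GPU ceiling: {gpu_bw / model_bytes:.0f} tok/s")  # ~125 tok/s

If you see low double-digit tokens/sec on hardware that should manage over a hundred, this bandwidth gap is the usual explanation.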
Solution
1. Confirm the GPU is being used:
# Watch GPU usage while a prompt runs
watch -n 0.5 nvidia-smi
If utilization stays at 0%, you're on CPU.
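If you prefer a compact rolling readout over the full watch screen, nvidia-smi's query mode works too. VRAM usage is the more reliable signal: an offloaded model occupies several GB of memory.used even when utilization dips to 0% between tokens.

# Print GPU utilization and VRAM use once per second
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1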
2. PyTorch — check the install:
import torch
# Expect: True 12.4 (or similar CUDA version). "False None" means a CPU-only wheel.
print(torch.cuda.is_available(), torch.version.cuda)
If False / None, reinstall with the CUDA index:
pip uninstall torch torchvision -y
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124
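The cu124 suffix assumes your NVIDIA driver supports CUDA 12.x; substitute the index that matches your driver (nvidia-smi prints the highest supported CUDA version in its header). Then re-run the check from a fresh shell:

python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"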
3. llama.cpp — set GPU layers:
# Offload all layers to GPU; 99 exceeds any common model's layer count
./llama-cli -m model.gguf -ngl 99 -p "test"
Watch the loader output — it should say offloaded N/N layers to GPU.
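If you drive llama.cpp through the llama-cpp-python bindings rather than the CLI, the equivalent knob is n_gpu_layers. A minimal sketch, assuming a local model.gguf (in the Python API, -1 means all layers):

from llama_cpp import Llama
# n_gpu_layers=-1 offloads every layer; 0 forces CPU (the failure mode above)
llm = Llama(model_path="model.gguf", n_gpu_layers=-1, verbose=True)
# verbose=True prints the same "offloaded N/N layers" loader lines
print(llm("test", max_tokens=8)["choices"][0]["text"])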
4. Ollama — check env:
env | grep OLLAMA
# Unset any blockers
unset OLLAMA_NO_GPU
ollama serve
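If Ollama runs as a systemd service rather than a foreground process, unset the variable in the service's environment (e.g. via systemctl edit ollama), not just your shell. Then confirm placement; on recent Ollama versions the PROCESSOR column of ollama ps shows where the loaded model lives:

# Expect "100% GPU" (or a GPU/CPU split) in the PROCESSOR column
ollama ps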
5. Confirm you built with the right backend. A llama.cpp built without GGML_CUDA=1 silently runs on CPU even with -ngl set. Rebuild with the correct backend.
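A minimal CUDA rebuild, assuming the standard llama.cpp CMake flow (older Makefile builds used GGML_CUDA=1 as a make variable instead):

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

After rebuilding, re-run step 3 and check the loader output again for the offloaded N/N layers line.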
Did this fix it?
If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.