RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.



Docker: could not select device driver "" with capabilities: [[gpu]]

By Fredoline Eruo · Last verified May 8, 2026

Cause

Docker was asked to launch a container with --gpus all, but the NVIDIA Container Toolkit is either not installed or not registered as a runtime with the Docker daemon. Without the toolkit, Docker has no GPU driver integration to expose to containers, so the daemon reports that it "could not select device driver".
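
Before reinstalling anything, you can check which half is missing. A minimal diagnostic sketch (assumes a Linux host with Docker installed; the grep against the daemon's runtime list is the key check):

```shell
# Is the toolkit CLI installed at all?
command -v nvidia-ctk >/dev/null \
  && echo "nvidia-ctk: found" \
  || echo "nvidia-ctk: missing (install the toolkit, step 1)"

# Is the nvidia runtime registered with the Docker daemon?
docker info --format '{{json .Runtimes}}' 2>/dev/null | grep -q '"nvidia"' \
  && echo "nvidia runtime: registered" \
  || echo "nvidia runtime: not registered (run nvidia-ctk runtime configure, step 1)"
```

If the first check passes but the second fails, you only need the configure-and-restart half of step 1 below.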

Solution

1. Install the NVIDIA Container Toolkit:

# Ubuntu / Debian
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update && sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
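
To confirm what the configure step actually changed: nvidia-ctk registers the runtime in /etc/docker/daemon.json. On a default install the relevant section typically looks like this (your file may contain other keys; this fragment is illustrative, not a file to paste wholesale):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```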

2. Verify with a test container:

docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

It should print the same nvidia-smi table you see on the host; if it does, GPU pass-through is working.

3. Docker Desktop: on Windows, GPU pass-through requires the WSL2 backend (with a current NVIDIA driver installed on the Windows side); on macOS it is not supported at all, so run GPU workloads on the host directly.

4. For Ollama/vLLM in containers, use the official images that bundle CUDA:

# Ollama (the named volume persists downloaded models across restarts)
docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# vLLM OpenAI-compatible server (publish its API port; pass the model to serve)
docker run --gpus all -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest --model <model-id>
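
If you launch these through Docker Compose instead of docker run, the equivalent of --gpus all is a device reservation on the service. A sketch for the Ollama case (Compose v2 syntax; the service and volume names are illustrative):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  ollama:
```

This still depends on the NVIDIA Container Toolkit being installed and registered on the host, exactly as in step 1.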

Related errors

  • Ollama: bind: address already in use (port 11434)
  • Ollama: Error: model 'X' not found
  • Ollama truncates input — default context length is only 2048
  • Ollama: connection refused on localhost:11434
  • Token generation slows as conversation gets longer

Did this fix it?

If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.