Docker: could not select device driver "" with capabilities: [[gpu]]
Cause
Docker was asked to start a container with --gpus all, but the NVIDIA Container Toolkit is either not installed or not registered with the Docker daemon. Without the toolkit, Docker has no device driver capable of exposing GPUs to containers, so the run fails before the container starts.
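You can confirm this is the failure mode before reinstalling anything. A minimal diagnostic, assuming a standard Linux install (both checks degrade gracefully if Docker or the toolkit is absent):

```shell
# Is the NVIDIA container runtime binary installed at all?
command -v nvidia-container-runtime \
  || echo "nvidia-container-runtime not on PATH -> toolkit not installed"

# Is it registered as a runtime with the Docker daemon?
docker info --format '{{json .Runtimes}}' 2>/dev/null | grep -q nvidia \
  && echo "nvidia runtime registered with Docker" \
  || echo "nvidia runtime NOT registered with Docker"
```

If the binary exists but the runtime is not registered, you only need the configure-and-restart steps below, not a reinstall.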
Solution
1. Install the NVIDIA Container Toolkit:
# Ubuntu / Debian
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update && sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
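The nvidia-ctk step above edits /etc/docker/daemon.json for you. After it runs, the file should contain a runtimes entry along these lines (your file may carry other keys too; the runtime path can also be an absolute path):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

If the entry is missing after a daemon restart, re-run the configure command and check its output for errors.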
2. Verify with a test container:
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
This should print the same nvidia-smi table you get on the host. If it does, GPU pass-through is working.
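Once --gpus all works, you can also hand a container a subset of GPUs. A sketch using the same test image (the extra quoting is needed so the shell passes the device= value through to Docker intact, and matters once you list more than one device):

```shell
# Expose only GPU 0 to the container; nvidia-smi inside should
# list a single device. The fallback echo keeps this safe to paste
# on a machine where the toolkit is not yet configured.
docker run --rm --gpus '"device=0"' nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi \
  || echo "run failed -- check that the NVIDIA Container Toolkit is configured"
```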
3. Docker Desktop on Windows / macOS: on Windows, GPU pass-through requires the WSL2 backend; on macOS it is not supported at all, so run GPU workloads directly on the host.
4. For Ollama or vLLM in containers, use the official images, which bundle the CUDA runtime:
docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker run --gpus all -p 8000:8000 -v ~/.cache/huggingface:/root/.cache/huggingface vllm/vllm-openai:latest --model <model-id>
The vLLM image serves an OpenAI-compatible API on port 8000; pass the Hugging Face ID of the model you want to serve as --model.
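Once the containers are up, a quick smoke test against each service confirms the API is reachable. This assumes the default port mappings shown above (11434 for Ollama, 8000 for vLLM); adjust if you mapped them differently:

```shell
# Ollama: /api/tags lists pulled models as JSON.
curl -sf http://localhost:11434/api/tags \
  || echo "Ollama not reachable on :11434"

# vLLM: /v1/models is the OpenAI-compatible model listing.
curl -sf http://localhost:8000/v1/models \
  || echo "vLLM not reachable on :8000"
```

An empty model list from Ollama is normal on a fresh volume; a connection failure means the container is not running or the port mapping is wrong.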
Did this fix it?
If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.