WSL GPU not detected — get NVIDIA visible inside WSL2
WSL2 doesn't pass the GPU through unless the host driver is right and the kernel is current. Here's the install order that actually works in 2026, and how to confirm passthrough is live before you waste an afternoon.
Diagnostic order — most likely first
Host Windows NVIDIA driver missing or out-of-date
Inside WSL: `nvidia-smi` returns 'command not found' or 'failed to communicate.' On the Windows host: open Device Manager → check that the NVIDIA card shows no warning triangle. A driver version older than 535 means no WSL passthrough.
Install or update the Windows NVIDIA driver from nvidia.com (Game Ready or Studio, both ship CUDA-on-WSL support since 535). Reboot. Don't install NVIDIA drivers inside WSL — that breaks passthrough.
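A quick way to check whether the host driver clears the 535 bar is to parse the version number out of the `nvidia-smi` banner. A sketch, using a hard-coded sample line so it runs anywhere; on a real system, feed it the output of `nvidia-smi` itself:

```shell
# Sample nvidia-smi banner line; replace with: smi_line=$(nvidia-smi | sed -n 3p)
smi_line="| NVIDIA-SMI 551.61    Driver Version: 551.61    CUDA Version: 12.4 |"

# Extract the major driver version (the digits after "Driver Version: ")
version=$(printf '%s\n' "$smi_line" | sed -n 's/.*Driver Version: \([0-9]*\).*/\1/p')

# 535 is the minimum host driver for CUDA-on-WSL
if [ "$version" -ge 535 ]; then
  echo "host driver supports WSL passthrough"
else
  echo "update the Windows NVIDIA driver"
fi
```

If `nvidia-smi` isn't found at all inside WSL, that points at the host driver or kernel steps, not at anything installed in the distro.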
WSL kernel too old
`wsl --version` in PowerShell shows kernel < 5.15.x. GPU passthrough requires the inbox kernel from Windows 11 22H2+ or WSL 2.0+.
Run `wsl --update` in PowerShell as admin. Reboot WSL with `wsl --shutdown` then re-launch your distro.
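To confirm the kernel cleared the 5.15 bar after updating, compare the major/minor version from inside the distro. A sketch with a hard-coded sample value so it runs anywhere; substitute `kernel=$(uname -r)` on a real system:

```shell
# Sample WSL kernel string; replace with: kernel=$(uname -r)
kernel="5.15.167.4-microsoft-standard-WSL2"

major=${kernel%%.*}      # text before the first dot  -> "5"
rest=${kernel#*.}        # text after the first dot
minor=${rest%%.*}        # text before the next dot   -> "15"

# GPU passthrough wants kernel >= 5.15 per the guidance above
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 15 ]; }; then
  echo "kernel is new enough for GPU passthrough"
else
  echo "run: wsl --update, then wsl --shutdown"
fi
```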
CUDA toolkit installed inside WSL is conflicting with passthrough
Installing `nvidia-cuda-toolkit` from the distro's default apt repo pulls in a Linux NVIDIA driver that conflicts with the Windows passthrough driver.
Remove the apt package: `sudo apt remove --purge nvidia-driver-* nvidia-utils-*`. Install only `cuda-toolkit-12-x` from NVIDIA's WSL-specific repo. The Windows host driver provides the GPU; WSL gets only the toolkit.
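A sketch of the toolkit-only install on an Ubuntu-based WSL distro, using NVIDIA's WSL-specific repo. The `12-6` toolkit version and the keyring filename are examples; check NVIDIA's repo listing for the current ones:

```shell
# Add NVIDIA's WSL-Ubuntu repo key (keyring version is an example)
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Toolkit only — deliberately NOT nvidia-driver-* or nvidia-cuda-toolkit.
# The driver comes from the Windows host via passthrough.
sudo apt-get install -y cuda-toolkit-12-6
```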
Docker Desktop running with WSL2 backend not updated
Docker can't see GPU even though `nvidia-smi` works in plain WSL.
Update Docker Desktop to 4.30+. Enable Settings → Resources → WSL Integration. Add the `--gpus all` flag when running containers (or the equivalent compose `deploy.resources.reservations.devices`).
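A minimal smoke test once the settings are in place: run `nvidia-smi` inside a CUDA base container. The image tag here is an example; any CUDA base image you already use works:

```shell
# If passthrough reaches Docker, this prints the same table as host nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Compose equivalent of --gpus all, under the service definition:
# deploy:
#   resources:
#     reservations:
#       devices:
#         - driver: nvidia
#           count: all
#           capabilities: [gpu]
```

If plain-WSL `nvidia-smi` works but this fails, the problem is Docker's config, not the driver or kernel.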
Card genuinely doesn't support WSL passthrough
Pascal-era cards (GTX 10-series and older). Some workstation cards in odd configs. This is rare in 2026 but real.
If your card predates Turing (RTX 20-series), WSL2 GPU support is limited or absent. Native Linux dual-boot is the alternative. Or upgrade.
Frequently asked questions
Does WSL2 GPU performance match native Linux?
Yes, within 1-3% on most inference workloads in 2026. The passthrough overhead is negligible. The wins of WSL (shared filesystem with Windows, no dual-boot juggling) typically outweigh the tiny perf delta.
Do I need CUDA installed twice (Windows + WSL)?
No. Install the Windows NVIDIA driver only on Windows. Inside WSL, install the CUDA toolkit (compiler + libraries) but NOT the driver — the driver comes via passthrough from Windows. Confusing but correct.
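You can see the split directly inside WSL: the driver libraries are mounted in from Windows under `/usr/lib/wsl/lib`, while the toolkit is an ordinary Linux install. A diagnostic sketch (output depends on your setup):

```shell
# Driver side: libcuda is mounted from the Windows host, not installed by apt
ls /usr/lib/wsl/lib/libcuda.so* 2>/dev/null \
  && echo "passthrough driver libs present"

# Toolkit side: nvcc should resolve to a normal Linux path
command -v nvcc || echo "toolkit not installed (or not on PATH)"

# Sanity check: no Linux driver package should be installed at all
dpkg -l 2>/dev/null | grep -E '^ii +nvidia-driver' \
  && echo "WARNING: Linux driver package found — remove it"
```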
Will an AMD GPU work the same way in WSL?
ROCm WSL support is officially limited as of 2026. Some forks claim functional setups for 7900-series cards but this is fragile. If AMD WSL support is a hard requirement, verify before buying — or run native Linux.
Related troubleshooting
Docker doesn't expose the host GPU by default. The NVIDIA Container Toolkit is the bridge. Here's the install + the runtime config + the four common symptoms that mean it's misconfigured.
Why CUDA OOM happens during local LLM inference and image gen, how to confirm the real cause, and the four real fixes (smaller quant, shorter context, gradient checkpointing, or more VRAM).
PyTorch falsely reporting no CUDA is the most common Python ML setup failure. The cause is almost always: wrong PyTorch wheel for your CUDA version, or a CPU-only build accidentally installed.
When the fix is hardware
A surprising fraction of troubleshooting tickets resolve to: this card doesn't have enough VRAM for what you're asking it to do. If you're hitting OOM after every reasonable fix, or your GPU genuinely can't fit the model you need, it's upgrade time: