WSL2: nvidia-smi works but PyTorch sees no CUDA / libcuda.so missing

OSError: libcuda.so.1: cannot open shared object file: No such file or directory
By Fredoline Eruo · Last verified May 8, 2026

Cause

WSL2 inherits the NVIDIA driver from the Windows host through a special mount (/usr/lib/wsl/lib). When that mount is missing, broken, or shadowed by a Linux-side libcuda installation, PyTorch can't find the driver library even though nvidia-smi (which uses a different path) works.

A common cause: someone ran apt install nvidia-driver-XXX inside WSL2, which is wrong — it installs Linux driver bits that conflict with the WSL2 host pass-through.
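
Before changing anything, you can confirm which libcuda the dynamic loader would actually resolve. This is a quick diagnostic sketch assuming the stock Ubuntu-on-WSL layout, where /usr/lib/wsl/lib is registered with the linker:

ldconfig -p | grep libcuda
# A healthy setup points into /usr/lib/wsl/lib; an entry under
# /usr/lib/x86_64-linux-gnu usually means a Linux-side driver
# package is shadowing the host pass-through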

Solution

1. Confirm the WSL2 driver mount is intact:

ls -la /usr/lib/wsl/lib/libcuda*
# Should show libcuda.so.1.1 and libcuda.so symlinks
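
If the files are present but the symlinks look suspect, a quick sanity check is to resolve them:

readlink -f /usr/lib/wsl/lib/libcuda.so.1
# should resolve to libcuda.so.1.1 in the same directory;
# a dangling link means the mount is there but broken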

2. If you installed Linux NVIDIA drivers inside WSL, remove them:

sudo apt purge -y 'nvidia-*' 'libnvidia-*'
sudo apt autoremove

Then shut down WSL so the distro restarts cleanly the next time you open it:

# in Windows PowerShell
wsl --shutdown
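
When the distro comes back up, an optional check confirms the purge left nothing behind:

dpkg -l | grep -Ei 'nvidia-driver|libnvidia'
# no output means the Linux-side driver packages are gone;
# anything still listed is a leftover to purge again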

3. Update the Windows host driver to a recent version (R535+ for full WSL2 CUDA support). Reboot Windows after.
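
You can read the currently installed host driver version from PowerShell; nvidia-smi ships with the Windows driver and reports it in its header:

# in Windows PowerShell
nvidia-smi
# the "Driver Version" field is the host driver that WSL2 passes through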

4. Update WSL itself:

wsl --update
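
On Store-distributed WSL you can confirm what you are now running (the --version flag is not available on very old in-box WSL builds):

# in Windows PowerShell
wsl --version
# prints the WSL, kernel, and WSLg versions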

5. Add the WSL lib path explicitly if PyTorch still can't find it:

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
python -c "import torch; print(torch.cuda.is_available())"  # True

6. Install the CUDA Toolkit (not the driver) inside WSL only if you need nvcc for building:

sudo apt install cuda-toolkit-12-4

Toolkit ≠ driver; the toolkit is safe to install in WSL.
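
Once it is installed, nvcc confirms the toolkit version. Note that the package name above assumes NVIDIA's CUDA apt repository for WSL-Ubuntu is already configured, and nvcc usually lands under /usr/local/cuda/bin rather than on your PATH:

export PATH=/usr/local/cuda/bin:$PATH
nvcc --version
# the toolkit version is independent of the driver passed through from Windows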

Related errors

  • CUDA driver version is insufficient for CUDA runtime version
  • nvidia-smi: command not found
  • PyTorch CUDA error: driver version is insufficient for CUDA runtime
  • WSL2 GPU not detected — nvidia-smi missing or empty
  • Docker container can't see GPU — nvidia-container-toolkit missing

Did this fix it?

If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.