RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP·Fredoline Eruo


NeRF (Neural Radiance Fields)

Volumetric scene representation from posed images. Largely superseded by Gaussian Splatting for real-time use but still relevant for research.

Setup walkthrough

  1. pip install nerfstudio (the standard open-source NeRF training framework — wraps multiple NeRF variants).
  2. Capture a scene: 50-200 photos of a static scene from different angles. Use a phone camera. Orbit around the subject while keeping 60-80% overlap.
  3. Run COLMAP for camera pose estimation: ns-process-data images --data dataset/input_images/ --output-dir dataset/processed/ (processes photos, runs COLMAP, outputs transforms.json).
  4. Train a NeRF: ns-train nerfacto --data dataset/processed/ (Nerfacto is the default fast NeRF variant). Training takes 10-30 minutes on an RTX-class GPU for 100 photos.
  5. View the result: ns-viewer --load-config outputs/processed/nerfacto/config.yml → opens a web viewer at localhost:6006. Navigate the 3D scene in real-time.
  6. Render novel views: ns-render camera-path --load-config outputs/processed/nerfacto/config.yml --camera-path-filename camera_path.json --output-path renders/
  7. Reality: Gaussian Splatting (see /tasks/gaussian-splatting) has largely superseded NeRF for real-time rendering. NeRF is still relevant for research, volumetric effects (fog, smoke), and scenarios where splats struggle (transparent objects, reflections).
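The 60-80% overlap rule from step 2 translates into a quick capture-planning calculation. A minimal sketch (the 66° FOV figure and the `orbit_photo_count` helper are illustrative assumptions, not part of nerfstudio):

```python
import math

def orbit_photo_count(h_fov_deg: float, overlap: float) -> int:
    """Rough photo count for one full 360-degree orbit of a subject.

    Each new frame should overlap the previous one by `overlap` (0-1),
    so the camera may only advance (1 - overlap) of its horizontal
    field of view between shots. Assumes a simple circular orbit.
    """
    step_deg = h_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step_deg)

# A typical phone main camera has roughly a 65-70 degree horizontal FOV.
print(orbit_photo_count(66.0, 0.70))  # one ring at 70% overlap
print(orbit_photo_count(66.0, 0.80))  # one ring at 80% overlap
```

A single ring at ~70% overlap comes in under 20 frames, so plan two or three rings at different heights plus a handful of top-down shots to land in the 50-200 photo range.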

The cheap setup

Used RTX 3060 12 GB (~$200-250, see /hardware/rtx-3060-12gb). Trains Nerfacto on 100 photos in 15-30 minutes. Renders novel views at 5-10 fps after training (not real-time — this is why Gaussian Splatting won). For research/learning: perfectly usable. For production real-time rendering: use Gaussian Splatting instead. Pair with Ryzen 5 5600 + 32 GB DDR4 + 1TB NVMe. Total: ~$390-440. NeRF training is more VRAM-efficient than Gaussian Splatting (NeRF is a small MLP; splats store millions of explicit Gaussians). A 12 GB card handles larger scenes in NeRF than in GS.
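The VRAM claim above can be made concrete with back-of-envelope arithmetic. A sketch with illustrative sizes (the 25M-parameter field and the 3M-splat scene are assumptions, not measurements):

```python
def megabytes(n_params: int, bytes_per_param: int = 4) -> float:
    """Raw fp32 parameter footprint in decimal MB."""
    return n_params * bytes_per_param / 1e6

# NeRF: a compact MLP + hash-grid field (illustrative size, ~25M params).
nerf_params = 25_000_000

# 3DGS: explicit Gaussians, ~59 floats each (xyz, scale, rotation
# quaternion, opacity, 3rd-order SH color). 3M splats is a mid-size scene.
gauss_floats = 3 + 3 + 4 + 1 + 48
splats = 3_000_000
gs_params = splats * gauss_floats

print(f"NeRF field: ~{megabytes(nerf_params):.0f} MB")
print(f"3DGS scene: ~{megabytes(gs_params):.0f} MB")
```

Training VRAM runs several times the raw parameter footprint once gradients, optimizer state, and activations are counted, but the ratio between the two representations is the point: the splat scene's parameters alone outweigh the whole NeRF field by roughly 7x.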

The serious setup

Used RTX 3090 24 GB ($700-900, see /hardware/rtx-3090). Trains Nerfacto on 500+ photos in 20-40 minutes. The 24 GB VRAM handles high-resolution scenes with complex view-dependent effects (reflections, refractions, transparency) that Gaussian Splatting struggles with. For research and VFX (where rendering quality matters more than real-time speed): NeRF on 24 GB is the gold standard. Total: ~$1,800-2,200. For production rendering: RTX 4090 ($2,000) renders novel views at 20-30 fps after training. NeRF is quality-over-speed; GS is speed-over-quality. Choose based on your use case.
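The speed-vs-quality tradeoff is easy to put in wall-clock terms. A sketch assuming render-bound throughput with no I/O overhead (the per-card fps figures reuse the ranges quoted above):

```python
def render_minutes(n_frames: int, fps: float) -> float:
    """Wall-clock minutes to render a camera path at a given
    novel-view rate. Assumes rendering is the only bottleneck."""
    return n_frames / fps / 60.0

frames = 900  # 30 s of output video at 30 fps
for label, fps in [("RTX 3090 (~7 fps)", 7.0), ("RTX 4090 (~25 fps)", 25.0)]:
    print(f"{label}: {render_minutes(frames, fps):.1f} min")
```

Even at the slow end this is minutes, not hours, which is why offline-rendered NeRF output stays viable for VFX while interactive viewing pushed everyone to Gaussian Splatting.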

Common beginner mistake

The mistake: Training a NeRF on outdoor photos taken over 30 minutes, then getting a blurry, ghosted reconstruction.

Why it fails: NeRF assumes a static scene. Outdoor lighting changes over 30 minutes (sun moves, clouds pass, shadows shift), so each photo has different illumination. The model tries to reconcile contradictory pixel colors — was that wall bright (direct sun) or dark (cloud shadow)? It averages them → blur.

The fix: Capture photos within 2-5 minutes, ideally on overcast days (diffuse, consistent lighting). On sunny days, shoot within 1-2 minutes or accept artifacts in shadow regions. For dynamic scenes (people, cars), use robust NeRF variants that model transient objects (NeRF-W, RobustNeRF).

Lighting consistency is the hidden requirement of photogrammetry. Same lighting across all photos = sharp reconstruction. Moving sun = blur.
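The capture window is checkable after the fact. A minimal sketch that takes timestamps directly (a real pipeline would read them from each photo's EXIF DateTimeOriginal tag; `capture_window_ok` is a hypothetical helper, not a nerfstudio command):

```python
from datetime import datetime

def capture_window_ok(timestamps: list[datetime],
                      max_minutes: float = 5.0) -> bool:
    """True if the whole capture fits inside the lighting-consistency
    window (2-5 minutes per the guidance above)."""
    span = max(timestamps) - min(timestamps)
    return span.total_seconds() <= max_minutes * 60

# A one-minute burst: one shot every 2 seconds.
shots = [datetime(2026, 3, 1, 10, 0, s) for s in range(0, 60, 2)]
print(capture_window_ok(shots))

# One straggler taken 31 minutes later poisons the whole set.
late = shots + [datetime(2026, 3, 1, 10, 31, 0)]
print(capture_window_ok(late))
```

Run a check like this before burning 30 minutes of GPU time on a doomed training run.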

Recommended setup for NeRF (Neural Radiance Fields)

Recommended hardware
Best GPU for local AI →
All workloads ranked across VRAM tiers.
Recommended runtimes

Browse all tools for runtimes that fit this workload.

Budget build
AI PC under $1,000 →
Best GPU for this task
Best GPU for local AI →

Reality check

Local AI workloads have real hardware constraints that vary by task type. VRAM ceiling decides what model fits; bandwidth decides decode speed; compute decides prefill speed. Pick the GPU tier that fits your actual workload, not the spec sheet.
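The bandwidth-decides-decode claim has a simple first-order form: each generated token streams every weight once, so the ceiling is bandwidth divided by model size. A sketch with illustrative cards and an assumed Q4 7B model (~4 GB):

```python
def decode_tok_s(bandwidth_gb_s: float, model_gb: float) -> float:
    """First-order decode estimate: every token reads all weights once,
    so throughput caps at bandwidth / model size. Ignores KV-cache reads
    and kernel overhead, so real numbers come in lower."""
    return bandwidth_gb_s / model_gb

model_gb = 4.0  # 7B-class model quantized to ~Q4
print(f"RTX 3060 (360 GB/s): ~{decode_tok_s(360, model_gb):.0f} tok/s ceiling")
print(f"RTX 3090 (936 GB/s): ~{decode_tok_s(936, model_gb):.0f} tok/s ceiling")
```

This is why two cards with identical VRAM can decode at very different speeds: the bandwidth ratio carries straight through to tokens per second.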

Common mistakes

  • Buying for spec-sheet VRAM without modeling KV cache + activation overhead
  • Underestimating quantization quality loss below Q4
  • Skipping flash-attention support (real perf gap on long context)
  • Ignoring sustained-load thermals (laptops thermal-throttle within 30 min)
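The first mistake on the list is avoidable with arithmetic: KV cache grows linearly with context and sits on top of the weights. A sketch using an illustrative 7B-class config with GQA (32 layers, 8 KV heads, head_dim 128, fp16 cache; the numbers are assumptions, not any specific model's spec):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx_tokens: int, bytes_per_elem: int = 2) -> float:
    """KV cache = 2 (K and V) x layers x kv_heads x head_dim
    x context length x element size."""
    return (2 * n_layers * n_kv_heads * head_dim
            * ctx_tokens * bytes_per_elem) / 1e9

def fits(vram_gb: float, weights_gb: float, kv_gb: float,
         overhead_gb: float = 1.5) -> bool:
    """Leave headroom for activations, CUDA context, fragmentation."""
    return weights_gb + kv_gb + overhead_gb <= vram_gb

kv = kv_cache_gb(32, 8, 128, ctx_tokens=8192)
print(f"KV cache at 8k ctx: {kv:.2f} GB")
print("Fits in 12 GB with ~4 GB of Q4 weights:", fits(12.0, 4.0, kv))
```

The takeaway: budget weights plus cache plus headroom against the card, not spec-sheet VRAM against weights alone.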

What breaks first

The errors most operators hit when running NeRF (Neural Radiance Fields) locally. Each links to a diagnose+fix walkthrough.

  • CUDA out of memory →
  • Model keeps crashing →
  • Ollama running slow →
  • llama.cpp too slow →

Before you buy

Verify your specific hardware can handle NeRF (Neural Radiance Fields) before committing money.

  • Will it run on my hardware? →
  • Custom compatibility check →
  • GPU recommender (4 questions) →

Related tasks

Gaussian Splatting
Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
  • Will it run on my hardware? →
Compare hardware
  • Curated head-to-heads →
  • Custom comparison tool →
  • RTX 4090 vs RTX 5090 →
  • RTX 3090 vs RTX 4090 →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Specialized buyer guides
  • GPU for ComfyUI (image-gen) →
  • GPU for KoboldCpp (RP/long-context) →
  • GPU for AI agents →
  • GPU for local OCR →
  • GPU for voice cloning →
  • Upgrade from RTX 3060 →
  • Beginner setup →
  • AI PC for students →
Updated 2026 roundup
  • Best free local AI tools (2026) →