Volumetric scene representation from posed images. Largely superseded by Gaussian Splatting for real-time use but still relevant for research.
1. pip install nerfstudio (the standard open-source NeRF training framework; wraps multiple NeRF variants).
2. ns-process-data images --data dataset/input_images/ --output-dir dataset/processed/ (processes the photos, runs COLMAP for camera poses, outputs transforms.json).
3. ns-train nerfacto --data dataset/processed/ (Nerfacto is the default fast NeRF variant). Training takes 10-30 minutes on an RTX GPU for 100 photos.
4. ns-viewer --load-config outputs/processed/nerfacto/config.yml → opens a web viewer at localhost:6006. Navigate the 3D scene in real time.
5. ns-render camera-path --load-config outputs/processed/nerfacto/config.yml --camera-path-filename camera_path.json --output-path renders/ (renders frames along a camera path exported from the viewer). A scripted version of this pipeline appears after the hardware notes below.

Used RTX 3060 12 GB (~$200-250, see /hardware/rtx-3060-12gb). Trains Nerfacto on 100 photos in 15-30 minutes. Renders novel views at 5-10 fps after training (not real time; this is why Gaussian Splatting won). For research/learning: perfectly usable. For production real-time rendering: use Gaussian Splatting instead. Pair with a Ryzen 5 5600 + 32 GB DDR4 + 1 TB NVMe. Total: ~$390-440. NeRF training is more VRAM-efficient than Gaussian Splatting (a NeRF is a small MLP; splatting stores millions of explicit Gaussians). A 12 GB card handles larger scenes in NeRF than in GS.
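To put rough numbers on that VRAM claim, here is a back-of-envelope sketch. The parameter count for a Nerfacto-scale model and the splat count for a mid-size scene are illustrative assumptions, not measured values; actual training VRAM is several times higher in both cases (optimizer state, activations, cached rays), but the scaling argument survives.

```python
# Back-of-envelope: parameter memory of a NeRF-style model vs. explicit
# Gaussian splats. All counts below are illustrative assumptions.

BYTES_PER_FLOAT32 = 4

# NeRF side: Nerfacto combines a multiresolution hash grid with small MLPs.
# ~12.6M parameters is an assumed, order-of-magnitude figure.
nerf_params = 12_600_000
nerf_mb = nerf_params * BYTES_PER_FLOAT32 / 1e6

# Gaussian Splatting side: each Gaussian stores position (3), scale (3),
# rotation quaternion (4), opacity (1), and degree-3 spherical-harmonic
# color (16 coefficients x 3 channels = 48), i.e. 59 floats per splat.
floats_per_gaussian = 3 + 3 + 4 + 1 + 48
num_gaussians = 3_000_000  # assumed mid-size outdoor scene
gs_mb = num_gaussians * floats_per_gaussian * BYTES_PER_FLOAT32 / 1e6

print(f"NeRF model weights: ~{nerf_mb:.0f} MB")  # ~50 MB
print(f"3M-splat GS scene:  ~{gs_mb:.0f} MB")    # ~708 MB
```

An order of magnitude separates the two scene representations before training overhead is counted, which is why the same 12 GB card stretches further on NeRF.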
Used RTX 3090 24 GB ($700-900, see /hardware/rtx-3090). Trains Nerfacto on 500+ photos in 20-40 minutes. The 24 GB VRAM handles high-resolution scenes with complex view-dependent effects (reflections, refractions, transparency) that Gaussian Splatting struggles with. For research and VFX (where rendering quality matters more than real-time speed): NeRF on 24 GB is the gold standard. Total: ~$1,800-2,200. For production rendering: RTX 4090 ($2,000) renders novel views at 20-30 fps after training. NeRF is quality-over-speed; GS is speed-over-quality. Choose based on your use case.
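Whichever tier you choose, the capture-to-render loop is the same. Below is a minimal driver sketch that chains the documented commands via subprocess; the paths are placeholders, camera_path.json must be exported from the viewer first, and on current nerfstudio versions the config lands in a timestamped run directory (outputs/<name>/nerfacto/<timestamp>/config.yml), so adjust CONFIG to match your run.

```python
# Minimal end-to-end driver for the nerfstudio pipeline described above.
# Paths are placeholders; camera_path.json must be exported from ns-viewer
# beforehand. check=True aborts the run on the first failing step.
import subprocess

DATA_IN = "dataset/input_images/"
DATA_OUT = "dataset/processed/"
CONFIG = "outputs/processed/nerfacto/config.yml"  # may include a timestamped subdir

def run(cmd: list[str]) -> None:
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Pose estimation via COLMAP; writes transforms.json.
run(["ns-process-data", "images", "--data", DATA_IN, "--output-dir", DATA_OUT])

# 2. Train the default fast variant (10-30 min on an RTX GPU for ~100 photos).
run(["ns-train", "nerfacto", "--data", DATA_OUT])

# 3. Render frames along a camera path exported from the viewer.
run(["ns-render", "camera-path", "--load-config", CONFIG,
     "--camera-path-filename", "camera_path.json",
     "--output-path", "renders/"])
```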
The mistake: training a NeRF on outdoor photos taken over 30 minutes, then getting a blurry, ghosted reconstruction.

Why it fails: NeRF assumes a static scene. Outdoor lighting changes over 30 minutes (the sun moves, clouds pass, shadows shift), so each photo has different illumination. The model tries to reconcile contradicting pixel colors: was that wall bright (direct sun) or dark (cloud shadow)? It averages them → blur. The sketch below shows why averaging is the inevitable outcome.

The fix: capture all photos within a 2-5 minute window, ideally on an overcast day (diffuse, consistent lighting). On sunny days, shoot within 1-2 minutes or accept artifacts in shadow regions. For dynamic scenes (people, cars), use robust NeRF variants that model transient objects (NeRF-W, RobustNeRF). Lighting consistency is the hidden requirement of photogrammetry: same lighting across all photos = sharp reconstruction; moving sun = blur. A capture-window checker follows the averaging sketch below.
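Why averaging is inevitable: with a mean-squared photometric loss, the single color that best fits conflicting observations of the same surface point is their mean, which matches neither lighting condition. A toy sketch (the RGB values are invented for illustration):

```python
# Toy illustration of why inconsistent lighting produces washed-out color.
# A NeRF trains with an MSE photometric loss; for a static scene point the
# single color minimizing sum((c - c_i)^2) is the mean of the observations.
import numpy as np

# Assumed observations of the SAME wall pixel across the capture session:
# three photos in direct sun, two after a cloud rolled in.
observed = np.array([
    [0.90, 0.85, 0.80],  # sunlit
    [0.88, 0.84, 0.79],  # sunlit
    [0.91, 0.86, 0.81],  # sunlit
    [0.35, 0.33, 0.31],  # cloud shadow
    [0.33, 0.32, 0.30],  # cloud shadow
])

best_fit = observed.mean(axis=0)  # MSE-optimal constant color
print("sunlit mean: ", observed[:3].mean(axis=0).round(2))
print("shadow mean: ", observed[3:].mean(axis=0).round(2))
print("trained color:", best_fit.round(2))  # matches neither -> ghosted/blurred
```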
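And a pre-flight check for the capture window: read EXIF timestamps across the input folder and flag sessions longer than 5 minutes. This is a sketch using Pillow; tag 0x9003 (DateTimeOriginal) lives in the Exif sub-IFD and tag 306 (DateTime) in the base IFD per the EXIF spec, but cameras vary, so missing tags are treated as warnings rather than errors.

```python
# Check how much wall-clock time a capture folder spans, using EXIF
# timestamps. Requires Pillow (pip install Pillow). Images without a
# usable timestamp are skipped with a warning.
from datetime import datetime
from pathlib import Path
from PIL import Image

EXIF_IFD = 0x8769            # pointer to the Exif sub-IFD
DATETIME_ORIGINAL = 0x9003   # capture time (Exif sub-IFD)
DATETIME = 306               # fallback: DateTime tag in the base IFD

def capture_time(path: Path) -> datetime | None:
    exif = Image.open(path).getexif()
    raw = exif.get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL) or exif.get(DATETIME)
    # EXIF datetimes look like "2024:06:01 14:23:07"
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

times = []
for p in sorted(Path("dataset/input_images").glob("*.jpg")):
    t = capture_time(p)
    if t is None:
        print(f"warning: no EXIF timestamp in {p.name}")
    else:
        times.append(t)

if times:
    span = (max(times) - min(times)).total_seconds() / 60
    print(f"{len(times)} photos span {span:.1f} minutes")
    if span > 5:
        print("capture window exceeds 5 minutes; expect lighting drift")
```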
Browse all tools for runtimes that fit this workload.
Local AI workloads have real hardware constraints that vary by task type. VRAM ceiling decides what model fits; bandwidth decides decode speed; compute decides prefill speed. Pick the GPU tier that fits your actual workload, not the spec sheet.
The errors most operators hit when running NeRF (Neural Radiance Fields) locally. Each links to a diagnose+fix walkthrough.
Verify your specific hardware can handle NeRF (Neural Radiance Fields) before committing money.