RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo

INFO · PRICING CHANGE · 2026-05-01

NVIDIA B200 production ramps — used H100 SXM softens to ~$22-25k

▼ WHAT HAPPENED

NVIDIA B200 production volumes increased through Q1 2026, improving spot availability across hyperscalers and Tier-2 cloud providers. A secondary effect: the used H100 SXM market softened from a $40k+ peak to $22-25k for clean cards with documented service history. H200 SXM held its pricing better, settling around $28-31k retail versus $40-43k for B200.

▼ OPERATOR ANGLE

**For new cap-ex**: pick [H200 SXM](/hardware/nvidia-h200) over H100 SXM for nearly all new builds — 76% more memory and 43% more bandwidth at a 20-25% price premium is almost always worth it. Choose H100 SXM only to match an existing cluster.

**For frontier training**: [B200](/hardware/nvidia-b200) at $40k retail breaks even against an H200 cluster only on FP4-aggressive workloads. Run the math against your actual workload — if the FP4 gain isn't ≥30%, the H200 cluster wins on $/throughput.

**For the used market**: used H100 SXM at $22-25k is a buying opportunity if you already run DGX H100 infrastructure. Capacity adds make sense; a greenfield H100 SXM cluster does not.

**For cloud rental**: H100 SXM rental has dropped to $2.50-3.50/hr on Runpod/Lambda. Cap-ex breakeven now requires sustained 70%+ utilization for 12+ months.

See the [B200 vs H200 verdict](/compare/hardware/nvidia-b200-vs-nvidia-h200) and the [H100 SXM verdict](/hardware/nvidia-h100-sxm).
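The breakeven claims above are easy to sanity-check. A minimal sketch using the midpoints of the quoted ranges ($23.5k used H100 SXM, $3.00/hr rental, $40k B200, $29.5k H200) — the utilization figures and the 1.4x FP4 speedup are illustrative assumptions, not measured data, and the $/throughput check uses GPU price alone:

```python
# Back-of-envelope breakeven math using midpoints of the ranges quoted
# in this item. All inputs are illustrative assumptions; plug in your
# own quotes before deciding anything.

HOURS_PER_MONTH = 730  # average hours in a calendar month

def months_to_breakeven(card_price, rental_per_hr, utilization):
    """Months of rental spend at a given utilization before buying
    the card outright costs less than renting it."""
    monthly_rental = HOURS_PER_MONTH * utilization * rental_per_hr
    return card_price / monthly_rental

def b200_wins_on_throughput(b200_price, h200_price, fp4_speedup):
    """True if B200 beats H200 on GPU-price-per-unit-throughput alone.
    Node-level costs (chassis, NICs, power) shift the real threshold
    lower than this GPU-only comparison suggests."""
    return b200_price / fp4_speedup < h200_price

# Used H100 SXM at $23.5k vs $3.00/hr rental:
for util in (0.5, 0.7, 0.9):
    m = months_to_breakeven(23_500, 3.00, util)
    print(f"utilization {util:.0%}: breakeven after {m:.1f} months")

# B200 ($40k) vs H200 ($29.5k) at a hypothetical 1.4x FP4 throughput:
print(b200_wins_on_throughput(40_000, 29_500, 1.4))
```

At 70% utilization the midpoint numbers give roughly 15 months, consistent with the "70%+ for 12+ months" guidance above; at a 1.3x FP4 gain the GPU-only check flips to the H200.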
SOURCE: https://www.nvidia.com/en-us/data-center/b200/ [VENDOR-PRESS]

▼ ENTITIES REFERENCED

  • HARDWARE · NVIDIA B200
  • HARDWARE · NVIDIA GB200 NVL72
  • HARDWARE · NVIDIA H200
  • HARDWARE · NVIDIA H100 SXM
[pulse item] · runlocalai.co/pulse/nvidia-b200-production-ramp-pricing