Trust moat · Economics methodology

Local vs cloud economics methodology

The local-vs-cloud calculators on /compare/operator-costs and /compare/local-vs-cloud surface ranges, not point estimates. They’re built on a small set of opinionated assumptions chosen so that the output is useful to a US-based operator running a typical homelab or small-team workload. This page documents every assumption, explains the reasoning behind each, and is honest about the situations where the calculator’s answer is the wrong one.

Editorial (methodology) · Estimated (assumptions disclosed)
By Fredoline Eruo · Last reviewed 2026-05-08

Electricity at $0.16/kWh

The default electricity price is $0.16 per kWh, which sits near the US national average residential rate. Both calculators expose the rate as a user-editable field so operators in regions with different prices can swap in their actual number. The default exists because the most common operator question is “what would this cost a typical homelab in the US?” and a reasonable national-average answer is more useful than no answer.

Electricity prices vary roughly 5x across regions. A homelab in Quebec at industrial off-peak rates pays a fraction of what a homelab in coastal California pays at peak summer. The calculator’s output for a fixed workload can shift by a factor of 5 depending purely on this input, which is why the field is editable and why we never anchor a buying decision on a single point estimate. The supporting calculator at /resources/electricity-calculator lets operators model their own rate against typical 24/7 and bursty usage profiles.
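The electricity line item reduces to straightforward arithmetic, and the roughly 5x regional spread shows up directly in the output. A minimal sketch, with an illustrative 350 W rig and invented regional rates rather than catalog defaults:

```python
def monthly_electricity_cost(avg_watts: float, hours_per_day: float,
                             rate_per_kwh: float, days: float = 30.4) -> float:
    """Monthly electricity cost for a rig drawing avg_watts for hours_per_day."""
    kwh = avg_watts / 1000 * hours_per_day * days
    return kwh * rate_per_kwh

# A 350 W rig running 24/7 at the $0.16/kWh default:
default = monthly_electricity_cost(350, 24, 0.16)   # ≈ $40.86/mo

# The same rig across an illustrative 5x regional spread:
low = monthly_electricity_cost(350, 24, 0.07)    # cheap hydro region
high = monthly_electricity_cost(350, 24, 0.35)   # peak coastal rates
```

A fixed workload's cost moving 5x on one editable input is exactly why the calculator treats the rate as the first field to change, not a constant.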

Three-year amortization

Hardware acquisition cost is amortized over 36 months. This choice reflects three observations: GPUs are largely useful for AI workloads for at least three years; the resale market for consumer GPUs three years out is non-trivial but the residual value is low enough that pretending it doesn’t exist is closer to the truth than relying on it; and most homelab operators don’t track depreciation in any more sophisticated way than “I’ll keep this for a few years.”

Three years is on the short side for enterprise accounting and on the long side for the bleeding-edge user who upgrades each generation. We chose three because most catalog operators aren’t in either group, and the default needs to suit the median. The field is editable for operators who plan differently.
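The amortization math itself is straight-line division, optionally net of a resale floor. A sketch with a hypothetical $1,600 GPU; the figures are illustrative, not catalog prices:

```python
def monthly_amortized_cost(acquisition_cost: float, months: int = 36,
                           residual_value: float = 0.0) -> float:
    """Straight-line amortization of hardware cost over the window,
    net of any residual value assumed at the end of it."""
    return (acquisition_cost - residual_value) / months

# A $1,600 GPU over the default 36-month window, ignoring resale:
gpu_monthly = monthly_amortized_cost(1600)                          # ≈ $44.44/mo
# The same GPU with a conservative $300 resale floor assumed:
gpu_with_floor = monthly_amortized_cost(1600, residual_value=300)   # ≈ $36.11/mo
```

An operator who plans a two-year upgrade cycle would pass `months=24` and see the monthly number rise accordingly; that is the edit the field is there for.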

$50/h operator labor

The labor variable is the most contested assumption in any local-vs-cloud comparison. The catalog uses $50/h as a default because it represents a plausible blended rate for a homelab-running technical professional in the US — not their salary, not their billed rate, but the implicit value of the hours they spend maintaining a local-AI rig instead of doing something else.

The labor multiplier matters most in three places: initial setup time (typically 4–20 hours including driver battles, runtime tuning, and the first failed model load), ongoing maintenance (an hour or two per month to track runtime updates and re-validate workflows), and incident time when something breaks at 11 PM. The cloud comparison gets to claim zero on all three, which is the genuine strength of cloud rentals for spiky usage.

Operators who explicitly enjoy the maintenance work, or who value the privacy and control benefits over time, should set labor to $0 to see the pure infrastructure-cost comparison. Operators billing client time at $200/h and treating their homelab as cost should set it higher. The default is what we believe is the median honest answer for our reader; the field is meant to be edited.
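One plausible way the three labor components combine into a monthly figure: spread the one-time setup hours over the same amortization window as the hardware, and add recurring maintenance at the blended rate. The hours below are picked from the mid-range of the estimates above and are illustrative, not catalog defaults:

```python
def monthly_labor_cost(setup_hours: float, maintenance_hours_per_month: float,
                       rate_per_hour: float = 50.0,
                       amortization_months: int = 36) -> float:
    """Labor cost per month: one-time setup spread over the amortization
    window, plus recurring maintenance, both priced at the blended rate."""
    setup_monthly = setup_hours * rate_per_hour / amortization_months
    return setup_monthly + maintenance_hours_per_month * rate_per_hour

# Mid-range setup (12 h) plus 1.5 h/month maintenance at the $50/h default:
default_labor = monthly_labor_cost(12, 1.5)                        # ≈ $91.67/mo
# Operator who enjoys the work and zeroes the rate:
hobbyist = monthly_labor_cost(12, 1.5, rate_per_hour=0.0)          # $0.00
```

Setting the rate to $0 collapses the labor term entirely, which is the "pure infrastructure-cost" view described above; setting it to $200/h roughly quadruples it.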

Depreciation curves

The hardware depreciation model is intentionally simple: linear decline to zero over the amortization window, with a small residual-value floor for the GPUs that have a known resale market (RTX 3090, 4090, A100, H100). The floor is not an average; it’s the conservative low end of recent eBay sold-listing data for the device category at the relevant elapsed time. Linear depreciation overstates value loss in year one and understates it in year three; that’s a deliberate trade because the alternative — a more accurate exponential curve — would produce false precision the underlying data can’t support.
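The linear-decline-with-floor model described above can be sketched directly. The GPU price and floor here are hypothetical; real floors come from eBay sold-listing data at review time:

```python
def residual_value(acquisition_cost: float, months_elapsed: int,
                   window_months: int = 36, floor: float = 0.0) -> float:
    """Linear decline to zero over the amortization window, clamped to a
    resale-market floor where one is known (GPUs with a resale market only)."""
    linear = acquisition_cost * max(0, window_months - months_elapsed) / window_months
    return max(linear, floor)

# Hypothetical $1,600 GPU with a $300 conservative resale floor:
residual_value(1600, 12, floor=300)   # $1,066.67 — linear still above floor
residual_value(1600, 30, floor=300)   # $300.00   — floor has kicked in
residual_value(1600, 40, floor=300)   # $300.00   — past the window, floor holds
```

The clamp is what keeps the model from pretending a 4090 is worth zero at month 36 while also never crediting more than the conservative low end of observed resale.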

For non-GPU hardware (motherboards, cases, PSUs, NVMe drives), we don’t model resale at all. The administrative cost of selling each component on the secondary market typically exceeds the recovered value. This biases the local-cost number towards conservatism.

The cloud side — how we price rentals

The cloud comparison uses on-demand consumer-friendly rental rates from RunPod, Vast.ai, and Lambda as the default basket. We don’t use the major hyperscalers’ on-demand prices because almost no one running a homelab-sized workload actually pays those; the spot/preemptible rate is the operator-relevant comparison. But we don’t use spot rates either, because they’re too unstable for the calculator to anchor on. The rental-platform mid-tier on-demand rate is the compromise.

Rates fluctuate week to week as supply changes. The calculator records the rates as of the last editorial review (see the timestamp at the top of this page), and the comparison page carries a note when rates have moved meaningfully since. Operators choosing between cloud providers for actual workloads should price-check current rates rather than relying on the default; the comparison’s purpose is to inform the local-vs-cloud question, not to replace a price-shopping pass.
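One plausible way to reduce a basket of platform rates to a single default is the median, which resists a single platform spiking. The rates below are invented for illustration, and the catalog's actual aggregation rule is not specified here — this is a sketch of the shape of the step, not the step itself:

```python
import statistics

# Hypothetical snapshot of on-demand rates ($/h) for one GPU class across
# the three rental platforms; real values come from the last editorial
# review, not from this sketch.
basket = {"RunPod": 0.69, "Vast.ai": 0.55, "Lambda": 0.75}

default_rate = statistics.median(basket.values())   # $0.69/h
```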

What the calculation elides

Five honest acknowledgements of what the cost numbers do not capture.

  • Resale value beyond the GPU floor. Realistic operators sell entire built systems on local marketplaces, sometimes for meaningful fractions of build cost. We can’t model your local secondary market, so that upside stays out of the local-cost number.
  • Time-to-first-result. A cloud GPU is available in under a minute; a local rig requires procurement, build time, and configuration. For an operator with an immediate one-week project, cloud is cheaper at any electricity rate because local can’t answer in time at all. The calculator computes steady-state monthly cost and deliberately doesn’t try to model the procurement-window opportunity cost — that’s a decision, not a metric.
  • The frontier-quality gap. No local rig in consumer reach matches the largest closed frontier models on the hardest reasoning workloads. If your workload depends on frontier quality, the local-vs-cloud comparison is the wrong one — the right comparison is local + a frontier API for the hard tasks vs. a frontier-only setup. The catalog doesn’t hide this.
  • Bursty workload variance. The monthly cost assumes a usage profile that’s either continuous or modestly bursty. An operator running 4 hours of training a month and otherwise idle pays nearly the same local cost as an operator running 200 hours, while the cloud cost scales linearly. The calculator surfaces both endpoints; an operator with a bursty workload should weight cloud higher than the steady-state arithmetic suggests.
  • Privacy, control, and learning value. The most common reason operators cite for going local has nothing to do with cost — it’s data residency, latency predictability, or wanting to understand the stack. None of these enter the calculator. They’re acknowledged editorially on the comparison surface.
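The bursty-workload point reduces to a break-even calculation: local cost is mostly fixed per month, cloud cost scales with hours. A sketch under illustrative numbers (the fixed cost, marginal rate, and cloud rate below are invented, not calculator defaults):

```python
def breakeven_hours_per_month(local_fixed_monthly: float,
                              local_marginal_per_hour: float,
                              cloud_rate_per_hour: float) -> float:
    """Usage hours/month at which local and cloud monthly costs cross.
    Below this, cloud is cheaper; above it, local wins."""
    return local_fixed_monthly / (cloud_rate_per_hour - local_marginal_per_hour)

# Illustrative: $85/mo fixed local cost (amortization + labor),
# $0.06/h marginal electricity, $0.69/h cloud rental:
breakeven_hours_per_month(85, 0.06, 0.69)   # ≈ 135 hours/month
```

An operator at 4 hours a month sits far below any plausible break-even; one at 200 hours sits above it under most assumption sets. The interesting decisions live in between, which is why the calculator surfaces both endpoints.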

Why ranges, not precision

Cost calculators that produce single numbers are pretending. A $73.42 monthly cost vs. a $74.18 monthly cost is meaningless; the inputs — electricity, labor, depreciation, cloud rates — each carry uncertainty bands wide enough to swamp the difference. The calculator displays a low-end and a high-end estimate based on conservative and aggressive assumptions for each input, and the recommended reading is the range, not the midpoint. Same discipline as the confidence methodology applied to economics: tier-not-percentage, range-not-point.
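The range construction itself is simple: run the same cost model twice, once on the conservative assumption set and once on the aggressive one, and report both endpoints. Every figure below is illustrative, not a catalog default:

```python
def monthly_local_cost(electricity: float, amortization: float,
                       labor: float) -> float:
    """Total monthly local cost from its three main components."""
    return electricity + amortization + labor

# Conservative (low) and aggressive (high) assumption sets:
low = monthly_local_cost(electricity=18.0, amortization=36.0, labor=0.0)
high = monthly_local_cost(electricity=90.0, amortization=55.0, labor=110.0)

print(f"${low:.0f}–${high:.0f}/mo")   # the range, not a midpoint, is the answer
```

Note how wide the band is even with only three inputs varied: the high end is several times the low end, which is exactly why a $73.42-vs-$74.18 comparison carries no information.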

Adjacent reading

The standalone electricity calculator lets operators model their region’s rate against typical usage profiles. The operator-costs comparison is the calculator surface itself. The local-vs-cloud comparison is the broader narrative around the same numbers, including the frontier-quality gap and the privacy / control points the cost arithmetic doesn’t capture. The scoring methodology documents the parallel range-not-point discipline applied to catalog scores rather than to dollar figures.

Next recommended step

See your specific local-vs-cloud comparison with editable assumptions for electricity, labor, and amortization.

Back to /resources. See also /editorial-policy for how editorial pricing reviews are done.