
Hyperspace (P2P inference network)


By Fredoline Eruo·Last verified May 6, 2026·12,000 GitHub stars

Overview

Hyperspace is a decentralized peer-to-peer AI inference network with 2.7M+ CLI downloads and 2M+ active nodes globally as of April 2026. Its three-tier model routing (local registry → DHT → gossip broadcast) can locate a peer serving any GGUF model. The April 2026 milestone: 32 anonymous nodes collaboratively trained a language model in 24 hours, the first cross-consumer-device training run with no trusted infrastructure.
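The three-tier routing described above can be sketched as a simple fallback chain: check the node's local registry, then query the wider DHT, and only then broadcast to peers via gossip. This is an illustrative sketch with in-memory stubs; the names (`localRegistry`, `dhtIndex`, `gossipQuery`, `resolveModel`) are assumptions for illustration, not Hyperspace's actual API.

```typescript
type NodeId = string;

// Tier 1: peers this node already knows about (stubbed in memory).
const localRegistry = new Map<string, NodeId>([["llama-3-8b.gguf", "node-a"]]);

// Tier 2: a distributed hash table index, also stubbed in memory here.
const dhtIndex = new Map<string, NodeId>([["mistral-7b.gguf", "node-b"]]);

// Tier 3: ask peers directly via gossip; this stub knows one rare model.
function gossipQuery(model: string): NodeId | undefined {
  return model === "tinyllama.gguf" ? "node-c" : undefined;
}

/** Resolve which peer can serve `model`, walking the three tiers in order. */
function resolveModel(model: string): { node: NodeId; tier: string } | null {
  const local = localRegistry.get(model);
  if (local) return { node: local, tier: "local-registry" };

  const viaDht = dhtIndex.get(model);
  if (viaDht) return { node: viaDht, tier: "dht" };

  const viaGossip = gossipQuery(model);
  if (viaGossip) return { node: viaGossip, tier: "gossip" };

  return null; // no peer on the mesh currently has this model loaded
}
```

The point of the ordering is cost: a local map lookup is free, a DHT query costs a few network hops, and a gossip broadcast touches many peers, so it is the last resort.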

Pros

  • True P2P inference — no centralized server dependency
  • Three-tier model routing finds any node with the model loaded
  • Browser client (WebLLM) plus CLI plus tray app
  • Cache layer eliminates redundant computation across the network

Cons

  • Inference latency depends on network mesh state
  • Privacy model still maturing — verify before sending sensitive data
  • Smaller model selection than running locally with the full Ollama catalog

Compatibility

Operating systems: macOS, Linux, Windows, Browser
GPU backends: consumer GPUs via node-llama-cpp, Apple Silicon, WebLLM in browser
License: Open source · free (OSS) — pay-per-block on the live network

Frequently asked

Is Hyperspace (P2P inference network) free?

Hyperspace (P2P inference network) is free and open source to run yourself; inference on the live network is billed pay-per-block. Check the pricing page for current terms.

What operating systems does Hyperspace (P2P inference network) support?

Hyperspace (P2P inference network) supports macOS, Linux, Windows, Browser.

Which GPUs work with Hyperspace (P2P inference network)?

Hyperspace (P2P inference network) supports consumer GPUs via node-llama-cpp, Apple Silicon, WebLLM in browser. CPU-only inference is also possible but slow.

Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.