Hyperspace (P2P inference network)
A decentralized peer-to-peer AI inference network with 2.7M+ CLI downloads and 2M+ active nodes globally as of April 2026. Three-tier model routing (local registry → DHT → gossip broadcast) locates a peer serving any GGUF model. April 2026 milestone: 32 anonymous nodes collaboratively trained a language model in 24 hours, the first cross-consumer-device training run with no trusted infrastructure.
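The three-tier routing described above can be sketched as a simple fallback chain: try the cheapest tier first and only escalate on a miss. This is an illustrative sketch under stated assumptions; the `Tier`, `lookup`, and `resolveModel` names are hypothetical, not Hyperspace's actual API.

```typescript
// Hypothetical three-tier model lookup: local registry → DHT → gossip broadcast.
// Each tier either returns a node that has the model loaded, or null on a miss.
type NodeId = string;

interface Tier {
  name: string;
  lookup(modelId: string): Promise<NodeId | null>;
}

async function resolveModel(
  modelId: string,
  tiers: Tier[],
): Promise<{ tier: string; node: NodeId } | null> {
  for (const tier of tiers) {
    const node = await tier.lookup(modelId); // escalate to the next tier on a miss
    if (node !== null) return { tier: tier.name, node };
  }
  return null; // no reachable peer has the model loaded
}
```

The design point is ordering by cost: the local registry is a free lookup, the DHT is a bounded number of network hops, and a gossip broadcast is the expensive last resort.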
Pros
- True P2P inference — no centralized server dependency
- Three-tier model routing finds any node with the model loaded
- Browser client (WebLLM) plus CLI plus tray app
- Cache layer avoids recomputing identical requests across the network
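One plausible way a network-wide cache can deduplicate work is to key results by a hash of the full request (model, prompt, sampling parameters), so any node that already computed an identical request can serve the result. This is an assumption about the design, not Hyperspace's documented implementation; `InferenceRequest` and `cacheKey` are hypothetical names.

```typescript
import { createHash } from "node:crypto";

// Hypothetical request shape; only deterministic requests (temperature 0)
// are safe to cache, since sampled outputs differ run to run.
interface InferenceRequest {
  model: string; // e.g. a GGUF model identifier
  prompt: string;
  temperature: number;
}

// Content-addressed cache key: identical requests hash to the same key
// on every node, with no coordination required.
function cacheKey(req: InferenceRequest): string {
  const canonical = JSON.stringify([req.model, req.prompt, req.temperature]);
  return createHash("sha256").update(canonical).digest("hex");
}
```

Because the key is derived purely from request content, two nodes that have never spoken can still agree on it, which is what makes cross-network deduplication possible.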
Cons
- Inference latency depends on network mesh state
- Privacy model is still maturing; verify its guarantees before sending sensitive data
- Smaller model selection vs running locally with full Ollama catalog
Compatibility
| Category | Details |
| --- | --- |
| Operating systems | macOS, Linux, Windows, browser |
| GPU backends | Consumer GPUs via node-llama-cpp; Apple Silicon; WebLLM in the browser |
| License | Open source, free (OSS); pay-per-block on the live network |
Frequently asked
Is Hyperspace (P2P inference network) free?
The software is open source and free to run; inference on the live network is billed pay-per-block.
What operating systems does Hyperspace (P2P inference network) support?
macOS, Linux, and Windows, plus a browser client via WebLLM.
Which GPUs work with Hyperspace (P2P inference network)?
Consumer GPUs via node-llama-cpp, Apple Silicon, and in-browser GPU inference through WebLLM.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.