Ecosystem map · Updated May 6, 2026

The local AI agent ecosystem

Six zones covering the surfaces a developer touches when building or deploying agents that run partly or wholly on local hardware. Catalog entries are linked from each card; deeper architecture references are in /systems.

By Fredoline Eruo · Reviewed monthly

Coding agents

Tools that take a problem statement and produce code changes — branches, edits, PRs. The 2026 lineup splits into closed leaders (Claude Code, Cursor, GitHub Copilot) and open challengers (OpenHands, Aider, Cline, Goose). Local-LLM support varies sharply.

agent

Claude Code

30k

Anthropic's terminal-native coding agent. Tops SWE-bench Verified at 87.6% and SWE-bench Pro at 64.3% in 2026. Deep MCP integration, agentic file editing, and a $20/mo Pro tier.

ide

Cursor

Anysphere's AI-native IDE: a VS Code fork with Cursor Tab inline completion, agentic chat, and background agents. The best inline-completion 'flow' of 2026.

agent · OSS

OpenHands

72k

AI-driven development agent that completes engineering tasks end-to-end — branches, code, PRs. v1.6 added a Planning Mode that drafts a plan before executing. Local-LLM-friendly.

agent · OSS

Aider

30k

Terminal-based AI pair programmer. Run it in your project directory, describe a change, and it edits files and creates meaningful git commits. Works with any LLM, from local Ollama models to hosted providers like Anthropic.

agent · OSS

Cline

50k

VS Code extension agent — ~4M installs in 2026. Plan/Act mode, autonomous file edits with diff approval, terminal access. The leading open-source IDE agent.

agent · OSS

Continue

25k

Open-source VS Code and JetBrains assistant. Configurable autocomplete + chat + agent modes. Strong with local Ollama backends.

agent · OSS

Goose

18k

Open-source extensible AI agent, started inside Block (formerly Square) and now governed by the Agentic AI Foundation (AAIF) at the Linux Foundation. Supports 25+ providers, including Ollama.

agent · OSS

Roo Code (sunsetting May 15, 2026)

16k

Open-source AI dev-team extension for VS Code (1.55M installs, 23.8k GitHub stars). **Discontinued: all Roo Code products — Extension, Cloud, and Router — shut down on May 15, 2026.**

Personal AI agents

The non-coding side: assistants that connect models to messaging surfaces, productivity apps, and long-running task workflows. OpenClaw is the runaway 2026 release here.

Memory frameworks

Agents that remember across sessions need a memory layer. The 2026 split is between drop-in APIs (Mem0), OS-style explicit management (Letta), and graph-based reasoning (Mem0g, Zep / Graphiti).

MCP protocol layer

The open standard that ties LLM clients to external tools. 500+ public servers. Dive into the protocol details before deploying — see our MCP system guide for architecture, lifecycle, and security.
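As a concrete reference point, most MCP clients are wired to servers through a small configuration object that maps a server name to a launch command. The shape below follows the common `mcpServers` convention; the filesystem server package and the allowed path are illustrative placeholders, not a recommendation:

```typescript
// Sketch of an MCP client configuration. The client reads this at
// startup, spawns each server as a child process over stdio, and
// exposes whatever tools the server advertises to the model.
const mcpConfig = {
  mcpServers: {
    filesystem: {
      command: "npx",
      // Placeholder package and path — substitute your own server
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/projects"],
    },
  },
};

// Server names the client will launch
const serverNames = Object.keys(mcpConfig.mcpServers);
```

Because the client only needs a command to spawn, the same config format works whether the server wraps a local database, a SaaS API, or a local model's tool surface.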

Local inference runtimes

The runtime that hosts the model weights: llama.cpp and Ollama for accessibility, vLLM for throughput, MLX on Apple silicon, ExLlamaV2 for fast EXL2-quantized inference. The choice you make here constrains which agents and memory frameworks pair cleanly.
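To make the pairing constraint concrete, here is a minimal sketch of how an agent talks to a local Ollama server over its HTTP API. Ollama listens on localhost:11434 by default and exposes a `/api/generate` endpoint; the model name below is a placeholder for whatever you have pulled locally:

```typescript
// Minimal sketch of a call to a local Ollama server.
const OLLAMA_URL = "http://localhost:11434/api/generate";

interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean;
}

function buildRequest(model: string, prompt: string): GenerateRequest {
  // stream: false asks for a single JSON response instead of a token stream
  return { model, prompt, stream: false };
}

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    body: JSON.stringify(buildRequest(model, prompt)),
  });
  const data = await res.json();
  return data.response; // Ollama puts the completion text in `response`
}
```

Swapping runtimes usually means swapping this one endpoint and payload shape, which is why agents that speak an OpenAI-compatible API pair most cleanly — vLLM serves one natively, and Ollama offers one alongside its native API.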

Distributed + P2P inference

The newest zone. Hyperspace pioneered consumer-device P2P inference; vLLM remains the production multi-node standard. Watch this category — it's where the next moat shifts.

How this map updates

This page reads its zones live from the catalog. When a new tool ships and lands in our scripts/seed/agents.ts or scripts/seed/tools.ts file, it shows up here automatically. The editorial framing — zone titles, blurbs, "what changed" — is hand-written and refreshed on the first business day of each month. If the ecosystem shifts mid-cycle (a major release, a deprecation, a new zone emerging), we update sooner.
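The exact schema of scripts/seed/agents.ts isn't shown here, but a catalog entry of the kind this page renders typically looks something like the following — every field name below is hypothetical, and the values are drawn from the Aider card above:

```typescript
// Hypothetical catalog-entry shape; the real schema in
// scripts/seed/agents.ts may differ.
interface AgentEntry {
  name: string;
  zone: "coding-agents" | "personal-agents" | "memory" | "mcp" | "runtimes" | "p2p";
  oss: boolean;
  stars?: string; // e.g. "30k"; omitted for closed-source tools
  blurb: string;
}

const aider: AgentEntry = {
  name: "Aider",
  zone: "coding-agents",
  oss: true,
  stars: "30k",
  blurb: "Terminal-based AI pair programmer.",
};
```

Adding a new tool is then a one-object change: the seed script picks it up on the next build, and the card appears in its zone without touching the editorial copy.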

Going deeper