
AutoGen


By Fredoline Eruo · Last verified May 13, 2026 · 40,000 GitHub stars

Overview

Microsoft's multi-agent conversation framework. Define multiple LLM-backed agents (each with its own system prompt, tools, and capabilities) and let them converse, plan, and execute tasks together. It is one of the most widely adopted frameworks for orchestrating teams of specialised agents — the code-writer + code-reviewer + executor pattern is the canonical use case. Backend-agnostic: it works against any OpenAI-compatible endpoint, which covers most local runtimes (Ollama, vLLM, llama.cpp server). Python-first, with a v0.4 rewrite that split the codebase into autogen-core / autogen-agentchat / autogen-ext layers for cleaner extension.
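The canonical writer + reviewer + executor pattern can be sketched framework-free in a few lines. The agents below are plain functions standing in for LLM-backed agents — in AutoGen proper, each turn would be a full model call — but the orchestration loop (take turns on a shared transcript until a termination phrase appears) is the same shape AutoGen's team abstractions manage for you:

```python
# Framework-free sketch of the writer -> reviewer -> executor pattern.
# Agent internals here are deterministic stand-ins; in AutoGen each
# agent's turn would be an LLM call over the shared transcript.

def writer(transcript):
    return "def add(a, b): return a + b"

def reviewer(transcript):
    return "LGTM" if "def add" in transcript[-1] else "REVISE"

def executor(transcript):
    scope = {}
    exec(transcript[-2], scope)  # run the writer's last message
    return f"TERMINATE: add(2, 3) = {scope['add'](2, 3)}"

def round_robin(agents, max_turns=9, stop="TERMINATE"):
    """Rotate through agents on a shared transcript until one signals stop."""
    transcript = ["Task: write and verify an add() function."]
    for turn in range(max_turns):
        msg = agents[turn % len(agents)](transcript)
        transcript.append(msg)
        if msg.startswith(stop):
            break
    return transcript

log = round_robin([writer, reviewer, executor])
print(log[-1])  # → TERMINATE: add(2, 3) = 5
```

The termination condition is the part that matters in practice: without one, agents will happily keep conversing, which is where the chain explosions noted under Cons come from.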

Pros

  • Most mature multi-agent framework — battle-tested
  • Works with any OpenAI-compatible local endpoint
  • Studio UI for non-Python workflows
  • Strong code-execution + tool-use patterns out of the box
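The OpenAI-compatible point in practice: under the v0.4 layout, model clients live in autogen-ext, and pointing one at a local server is a base-URL change. A minimal configuration sketch, assuming a local Ollama server — the model name, port, and model_info values are illustrative, and the required model_info keys vary somewhat between autogen-ext versions:

```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Any OpenAI-compatible server works here: Ollama (shown), vLLM,
# or llama.cpp server — only base_url and the model name change.
model_client = OpenAIChatCompletionClient(
    model="llama3.1",                      # whatever the local server hosts
    base_url="http://localhost:11434/v1",  # Ollama's default endpoint
    api_key="not-needed-locally",          # local servers ignore the key
    model_info={                           # required for non-OpenAI models
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
    },
)
```

The same client object is then handed to any AgentChat agent, so swapping a hosted model for a local one touches this one definition rather than the agent code.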

Cons

  • Multi-agent debugging is genuinely hard — chain explosions common
  • v0.4 API differs sharply from v0.2 — community is split across both
  • Token cost compounds fast (every agent's turn is a full inference call)
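The compounding-cost con can be made concrete: every agent turn re-sends the entire transcript so far as prompt context, so cumulative prompt tokens grow roughly quadratically in the number of turns, not linearly. A toy model (the per-turn token counts are illustrative assumptions):

```python
# Each chat-completion call re-sends the whole history as its prompt,
# so cumulative prompt tokens grow ~quadratically with turn count.

def total_prompt_tokens(n_turns, system_tokens=200, reply_tokens=150):
    total = 0
    history = system_tokens
    for _ in range(n_turns):
        total += history          # the full history is re-sent this turn
        history += reply_tokens   # this turn's reply joins the history
    return total

print(total_prompt_tokens(5))   # → 2500
print(total_prompt_tokens(20))  # → 32500: 4x the turns, 13x the tokens
```

This is why turn caps and aggressive termination conditions matter more in multi-agent setups than in single-agent chains.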

Compatibility

Operating systems
Linux
macOS
Windows
GPU backends
CUDA
ROCm
Metal
CPU
License: Open source · free

Runtime health

Operator-grade signals on how actively AutoGen is being maintained, how fresh its measurements are, and what failure classes operators have flagged. Every label below is anchored to a real date or count — we never infer maintainer activity we can't show.

Release cadence

Derived from the most recent editorial signal on this row.

Active
Updated May 13, 2026

1 day since last refresh · source: lastUpdated

Benchmark freshness

How recent the editorial measurements on this runtime are.

0 editorial benchmarks

No editorial benchmarks for this runtime yet.

Community reproduction

Submissions that match an editorial measurement on similar hardware.

0 reproduced reports

No community reproductions on file yet.

Get AutoGen

Frequently asked

Is AutoGen free?

Yes — AutoGen is free to download and use, and it is open source under the permissive MIT license.

What operating systems does AutoGen support?

AutoGen runs on Linux, macOS, and Windows.

Which GPUs work with AutoGen?

AutoGen is backend-agnostic: GPU acceleration (CUDA, ROCm, Metal) depends on the inference runtime serving your models, not on AutoGen itself. CPU-only inference is also possible but slow.

Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.

Related — keep moving

Recommended hardware
Before you buy

Verify AutoGen runs on your specific hardware before committing money.