BLK · CATEGORY · MESSAGING-NATIVE AGENTS · MAY 2026

Personal AI agents with messaging UX.

OpenClaw + alternatives — May 2026

A new category emerged in 2026: AI agents that run on your machine and reach you through the messaging apps you already live in — WhatsApp, Telegram, Slack, Discord, Signal, iMessage. The breakout reference is OpenClaw (~347k GitHub stars, foundation-stewarded), but the pattern itself matters more than any one tool. This page covers the category, the current options, and the honest tradeoffs — without endorsement.

Published 2026-05-13 · Reviewed May 2026 · /tools/openclaw catalog page →
§ 01

TL;DR

The category — local-first personal-AI agents reached through messaging apps — is real in 2026. OpenClaw (~347k★ as of May 2026, per github.com/openclaw/openclaw) is the breakout reference implementation, but the pattern is already being copied (NVIDIA NemoClaw fork; Letta and Mem0 experimenting with messaging integrations).

The shape that makes the category work: a daemon on your machine, OpenAI-compatible client pointed at Ollama or llama.cpp, message-bus integrations into WhatsApp / Telegram / Slack / Discord / Signal / iMessage, plus persistent memory across sessions and skill-based tool execution.

If you want a multi-agent orchestration framework for building structured workflows instead, you want AutoGen or CrewAI — different shape of tool, covered in State of Local AI § 5.

§ 02

The category — what changed in 2026

The dominant local-AI interface pattern in 2024-2025 was the dedicated browser tab — Open WebUI, LM Studio, AnythingLLM. Useful but ceremonial: you remembered to open the tab when you wanted the agent, and the agent was forgotten the moment you closed it.

The 2026 pattern flips that. The agent runs persistently on your machine. It has memory across conversations. And it reaches you where you already live — the chat app you already check 50 times a day. The friction of “remember to use the AI” goes to zero.

The traction is unambiguous: OpenClaw is past ~347k stars, topped GitHub Trending in Q1 2026, and now has imitators across the ecosystem. The category is real and growing — and the question for an operator is no longer "should I have a local agent in my messaging app?" but "which option in the category fits me?"

§ 03

Editorial stance

RunLocalAI is brand-agnostic. We don't earn referral fees from OpenClaw, NemoClaw, Letta, Mem0, or any of the alternatives covered here. The mission is to make running AI locally a usable default — not to promote any specific tool inside that space.

This page covers OpenClaw at length because of its category leadership by traction (~347k stars, most-installed implementation, defines what the category looks like in practice). That coverage isn't an endorsement. Read the security caveats in § 9 carefully; read the “Bad fit” list in § 10. Match the tool to your situation, not the hype.

Same stance applies on every tool-focused editorial across the site — see also /how-we-make-money and /editorial-policy.

§ 04

How the pattern works

The simplest framing: OpenClaw is to a local-LLM agent what ChatGPT is to a frontier-cloud LLM — an opinionated interface built on top of a backend. The backend is your own machine; the interface is the messaging app you already live in.

That sounds small. It isn't. Instead of opening a dedicated UI, you talk to your agent from the same Telegram thread you use for everything else — and the agent has persistent memory across sessions, tool access to your filesystem, and shell-execution capability.

Under the hood: a Python daemon with an OpenAI-compatible client (so any model served behind an OpenAI-style API endpoint works), message-bus integrations into each chat platform, and a sandboxed skill-execution layer. Cross-platform across macOS, Windows, and Linux.
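
The "OpenAI-compatible" part is concrete: whatever client library the daemon uses, the wire format reduces to one well-known HTTP call, which is why any OpenAI-style backend (Ollama serves it at /v1/chat/completions) slots in. A minimal stdlib-only sketch — the function name is illustrative, not OpenClaw's actual API:

```python
import json

# Any OpenAI-compatible client ultimately POSTs a JSON body like this to
# <base_url>/v1/chat/completions -- the endpoint Ollama exposes locally.
BASE_URL = "http://127.0.0.1:11434"  # same base URL the config step below uses

def build_chat_request(model: str, user_message: str) -> tuple[str, bytes]:
    """Return (url, body) for an OpenAI-style chat-completions call."""
    url = f"{BASE_URL}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": True,  # stream tokens back to the chat surface as they arrive
    }).encode()
    return url, body
```

Swapping Ollama for llama.cpp's server (or any other OpenAI-style endpoint) means changing only the base URL.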

§ 05

Clawdbot → Moltbot → Molty → OpenClaw

Project chronology, because the rename chain is part of why it can be hard to search for:

  • November 2025 — Clawdbot. Peter Steinberger publishes the first version. Initial focus: WhatsApp + Telegram for personal use.
  • Late 2025 — Moltbot / Molty. Brief renames during the trademark cleanup phase. Most surviving docs from this era reference one of these three names.
  • January 30, 2026 — OpenClaw. Final rename + public open-source push. Star growth accelerates dramatically — past 200k by mid-February.
  • February 14, 2026 — Steinberger joins OpenAI (per public Wikipedia + KDnuggets coverage cited at § Sources). Project transitions to a non-profit foundation for stewardship — maintainer cadence preserved, community governance now in place.
  • April 2026 — NVIDIA “NemoClaw” partnership. NVIDIA Nemotron Labs publishes a fork / integration that routes OpenClaw through Nemotron models. Validates the category but isn't the upstream project.
  • May 2026 — ~347k stars. The category leader, with multiple imitators emerging.

§ 06

The messaging-UX bet

The thesis behind OpenClaw's success: people don't want a new app for AI. They want their AI inside the apps they already trust. Steinberger's observation — backed by ~347k stars' worth of agreement — was that the friction wasn't the model or the local runtime; it was opening a separate UI to talk to it.

Supported message surfaces as of May 2026:

  • WhatsApp (via the Business API)
  • Telegram (the canonical first integration)
  • Slack (workspace bot)
  • Discord (server bot + DMs)
  • Signal (via the official CLI)
  • iMessage (macOS-only — uses local AppleScript integration; the most fragile of the integrations because Apple's APIs aren't designed for this)
  • +20 more via the plugin marketplace

The practical implication is that “OpenClaw + Ollama + Hermes 3 8B” on a Mac with 16GB of unified memory becomes a private, always-on agent you can DM from any device while you're away from your desk. That's a fundamentally different relationship to local AI than opening Open WebUI when you happen to remember.

§ 07

Install + connect to a local model

The shortest path to a working install against a local Ollama:

# 1. Make sure Ollama is running and has a tool-use model
ollama serve &
ollama pull hermes3:8b      # 12GB VRAM
# (or hermes3:70b if you have 48GB+ unified)

# 2. Install OpenClaw
curl -fsSL https://openclaw.ai/install.sh | sh
# OR via package manager:
brew install openclaw       # macOS
# OR build from source:
git clone https://github.com/openclaw/openclaw
cd openclaw && pip install -e .

# 3. Configure to use local Ollama
openclaw config set llm.provider ollama
openclaw config set llm.base_url http://127.0.0.1:11434
openclaw config set llm.model hermes3:8b

# 4. Connect a messaging surface (Telegram is the easiest first one)
openclaw connect telegram
# follow the prompts to register a BotFather token

# 5. Start the daemon
openclaw start

From there, message your bot on Telegram. The first response will be slower (model load) — subsequent messages stream tokens in real time. Persistent context works across sessions by default; the agent remembers past conversations.

INSTALL NOTE

The exact install-script flags shift between releases — always cross-check the official docs at docs.openclaw.ai before running anything. The snippet above is current as of May 2026.

§ 08

AgentSkills — 100+ pre-configured

AgentSkills are OpenClaw's plugin abstraction. Each skill is a Python module exposing one or more tool calls the agent can invoke. The repository ships a maintained set (100+ skills as of May 2026 — see /skills directory) covering the categories most operators want:

SHELL + FILES

Run shell commands, read/write files, manage Git repos, edit configs.

WEB + BROWSER

Web search, page fetch, headless-Chrome automation, form fill.

CALENDAR + MAIL

Read/write Google Calendar, Outlook, Apple Calendar, IMAP/SMTP for mail.

CODE + DEV

GitHub API, Jira, Linear, sandboxed Python REPL, code-review hooks.

Writing a custom skill is a single Python file with a decorator — closer to a Smolagents tool than a full LangChain abstraction. Lightweight by design.
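
As a sketch of that shape — the `@skill` decorator and registry here are hypothetical stand-ins, not OpenClaw's actual API (check docs.openclaw.ai for the real decorator name and signature):

```python
import shutil

# Hypothetical skill registry -- OpenClaw's real decorator/API may differ.
SKILLS: dict[str, dict] = {}

def skill(name: str, description: str):
    """Register a function as a tool call the agent can invoke by name."""
    def register(fn):
        SKILLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@skill("disk_free", "Report free disk space for a path")
def disk_free(path: str = "/") -> str:
    usage = shutil.disk_usage(path)
    return f"{usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB"

# When the model emits a tool call, the daemon dispatches via the registry.
```

The point of the single-file-plus-decorator shape is that the skill carries its own name and description, which is what the model sees when deciding which tool to invoke.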

§ 09

Other options in the category

OpenClaw is a different shape of tool than the multi-agent frameworks. The comparison most people are searching for:

Tool | Shape | When to pick
OpenClaw | Personal AI agent + messaging UX | One person + always-on assistant inside Telegram/WhatsApp/Slack
Smolagents | Minimal framework with CodeAgent pattern | Building custom agents in Python; you write the code
NemoClaw (NVIDIA) | OpenClaw fork routed through Nemotron models | You're inside NVIDIA's ecosystem; want vendor support
CrewAI | Role-based multi-agent crews | Structured workflows with multiple agents collaborating
AutoGen | Free-form multi-agent conversation | Research / experimental multi-agent setups

The honest framing: OpenClaw isn't competing with AutoGen / CrewAI / Smolagents — it's competing with opening a separate AI tab in your browser. The closest peer is Letta (formerly MemGPT) on the persistent-memory axis, and Mem0 on the memory-as-a-service axis — but neither ships the messaging-UX layer.

§ 10

Security caveats — honest

Regulators have publicly flagged security risks with OpenClaw, and the editorial position here is to take those flags seriously. The specific concerns:

  • Shell + file access. AgentSkills can run shell commands on your machine. A prompt-injection attack via a message could theoretically get the agent to rm -rf or exfiltrate files. Sandboxing exists but is opt-in.
  • Messaging-platform attack surface. Anyone who can DM your bot can send instructions. The default config has no allowlist — anyone who finds the bot ID on Telegram or knows the Signal number can start a session.
  • Plugin marketplace risk. The 100+ official skills are vetted. The third-party marketplace is not — installing a community skill is running arbitrary Python.
  • Memory exfiltration. Persistent memory across sessions is the killer feature, but it also means a single prompt-injection can read every prior conversation.

The mitigations the foundation maintains:

  • Allowlist of authorized chat IDs (auth.allowed_users in config) — set this BEFORE you connect a messaging surface.
  • Sandboxed-shell mode (skills.shell.sandbox = docker) — runs shell commands in a constrained container.
  • Memory-redact filter for sensitive patterns (API keys, credentials, PII) on write.
  • Per-skill audit log — every tool call is logged with input + output for forensic review.
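
The redact filter is the least self-explanatory of the four; its shape is roughly a pattern scrub applied on every memory write. A minimal sketch — the patterns here are illustrative only, and a real deployment needs a vetted, much broader list:

```python
import re

# Illustrative patterns only -- extend and audit this list for your own secrets.
REDACT_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                  # OpenAI-style API keys
    re.compile(r"(?i)aws_secret_access_key\s*=\s*\S+"),  # AWS credentials
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US SSN shape (PII)
]

def redact(text: str) -> str:
    """Scrub sensitive patterns before the text hits persistent memory."""
    for pattern in REDACT_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Scrubbing at write time matters because of the exfiltration point above: anything that never reaches the memory store can't be read back out by a later prompt injection.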

Operator recommendation: enable the allowlist, sandbox the shell, audit the redact filter list, and disable the third-party marketplace unless you trust a specific maintainer. The defaults assume you're on a trusted network — they don't enforce it.
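
Concretely, using the config keys named above — exact CLI syntax mirrors the install snippet and may shift between releases, so verify against docs.openclaw.ai:

```shell
# Harden BEFORE running `openclaw connect <surface>`.
openclaw config set auth.allowed_users "123456789"   # your own chat ID(s) only
openclaw config set skills.shell.sandbox docker      # shell runs in a container
```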

§ 11

Who should (and shouldn't) use it

Good fit:

  • Solo operator who wants an always-on assistant in Telegram / Slack / Signal
  • Privacy-first user who refuses to send data to cloud APIs
  • Mac / Linux desktop user with at least 16GB unified or a 12GB+ GPU
  • Developer comfortable enabling allowlists + sandbox config before exposing the bot publicly

Bad fit:

  • Locked-down enterprise environment (no IT approval path for desktop daemons with shell access)
  • Multi-user shared inference scenarios — use vLLM + CrewAI instead
  • Anyone unwilling to configure auth.allowed_users explicitly before connecting messaging surfaces
  • Users who want a vendor-supported SLA — the foundation provides community support, not paid SLAs

§ 12

Where the category goes from here

The OpenClaw foundation has published a 2026 roadmap with three publicly committed milestones:

  • v1.0 GA (Q3 2026). Stabilized core API, plugin v2 spec, official Docker deployment template.
  • Sandboxing-by-default (Q4 2026). Shell skills move to sandbox mode by default; opt-out requires explicit config flag.
  • Federated identity (2027). Multi-device, single-agent — your phone, laptop, and desktop all sharing one OpenClaw identity + memory pool.

The category OpenClaw created — local-first agent + native messaging UX — already has imitators (NemoClaw, Letta experimenting with messaging integrations, several smaller forks). The category is real and growing; OpenClaw is the reference implementation.

SOURCES
/tools/openclaw catalog →

Compatibility matrix + pros/cons + GitHub link.

/quickstart Docker bundles →

Get the Ollama backend running first, then point OpenClaw at it.

State of Local AI 2026 →

Where OpenClaw sits in the broader 2026 agentic landscape.

macOS 26.5 for local AI →

The recommended Mac-side stack — OpenClaw pairs naturally with it.