Docs-aware chat with workspaces. Drop a folder of PDFs, get a working RAG chatbot in 5 minutes.
Editorial verdict: “Best fast-RAG app. The workspace model is the right abstraction for document-corpus chat.”
Which runtime + OS combos this app works with. Source of truth for "will it run on my setup?"
AnythingLLM is built around 'workspaces' — each one is a chat + a knowledge base + a model config. Drop a PDF folder, the app chunks and embeds it locally (or via OpenAI), and you can chat against it. Talks to Ollama, LM Studio, local embedding models, and many cloud providers. The fastest path from 'I have a folder of docs' to 'I have a chatbot for that folder.'
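The workspace pattern boils down to a chunk → embed → retrieve loop over each workspace's documents. A minimal, dependency-free sketch of that idea — note the toy bag-of-words "embedding" is a stand-in for the real local or OpenAI embedding models the app uses, and the `Workspace` class and its method names are hypothetical, not AnythingLLM's actual API:

```python
import math
from collections import Counter

def chunk(text, size=40):
    # Split a document into fixed-size word chunks (real pipelines overlap chunks).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Toy bag-of-words vector; a real app calls an embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Workspace:
    # One workspace = one knowledge base with retrieval over it.
    def __init__(self):
        self.index = []  # (chunk_text, embedding) pairs

    def add_document(self, text):
        for c in chunk(text):
            self.index.append((c, embed(c)))

    def retrieve(self, query, k=2):
        # Rank chunks by similarity to the query; the top-k get
        # stuffed into the chat prompt as context.
        q = embed(query)
        ranked = sorted(self.index, key=lambda p: cosine(q, p[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

ws = Workspace()
ws.add_document("Invoices are due within 30 days of receipt. Late invoices accrue interest.")
ws.add_document("Employees may expense travel meals up to 50 dollars per day.")
print(ws.retrieve("when are invoices due?", k=1))
```

Because each workspace owns its own index and model config, two workspaces can point at different folders, different embedding models, and different chat backends without interfering.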
Web or desktop chat client that connects to your local runtime.
Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.