Verba
Weaviate's open-source RAG demo turned production app. Strong defaults, opinionated stack.
Editorial verdict: “Best for ‘don't make me choose a chunking strategy’ teams. Opinionated stack works.”
Compatibility at a glance
Which runtime + OS combos this app works against. Source of truth for "will it run on my setup?"
What it is
Verba is Weaviate's reference RAG app: ingestion → chunk → embed → store → retrieve → generate, with a clean React UI. The selling point is the opinionated stack: you don't pick a vector store, embedder, or chunking strategy; you just point it at your data. Talks to Ollama, OpenAI, Cohere, Anthropic. Good for teams that want to ship internal-doc chat fast.
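The chunk → embed → store → retrieve stages above can be sketched in a few lines. This is a toy illustration only, not Verba's actual code or API: the bag-of-words "embedder" and fixed-size chunker are stand-ins (real deployments delegate embedding to Weaviate modules and an embedding model).

```python
# Toy sketch of the RAG stages a tool like Verba wires together:
# chunk -> embed -> store -> retrieve. Illustration only; not Verba's API.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    # Fixed-size word-window chunking (real chunkers are more nuanced).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(chunk_text: str) -> Counter:
    # Stand-in embedding: a term-frequency vector. A real stack uses a model.
    return Counter(chunk_text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list[tuple[str, Counter]], k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query vector, return top-k.
    q = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# Ingest: chunk a document and store (chunk, vector) pairs.
doc = "Verba ships with sane chunking defaults. Retrieval returns cited chunks."
store = [(c, embed(c)) for c in chunk(doc, size=6)]
print(retrieve("chunking defaults", store, k=1))
```

The generate step then feeds the retrieved chunks plus the query to an LLM; in Verba that call goes out to whichever provider you configured (Ollama, OpenAI, Cohere, or Anthropic).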
✓ Strengths
- Opinionated stack: fewer decisions
- Clean React UI with citation tracing
- Excellent default chunking + retrieval params
△ Caveats
- Tied to Weaviate (or you do the swap yourself)
- Less flexible than PrivateGPT if you want to swap components
About the RAG app category
Document retrieval + chat, fully offline-capable.
Where to go from here
Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
Did this app work for you on a specific rig? Submit a benchmark; it powers the model + hardware pages.