RUNLOCALAI


Jan

Fully offline

Privacy-first desktop chat with a curated model catalog. Llama / Mistral / Qwen, one click from the app.

Editorial verdict: “Best one-binary desktop chat. Curated catalog removes 'which model?' decision paralysis.”

Chat UI
Free
AGPL-3.0
★ 4.5 / 5
GitHub ★ 28,000
↗ Homepage · ↗ GitHub · ↗ Docs

Compatibility at a glance

Which runtime and OS combos this app works with. The source of truth for "will it run on my setup?"

§ Runtimes supported
llama-cpp · ollama · openai-compat
§ OS / platform
macos · linux · windows
§ Hardware + model hint
Minimum VRAM
4 GB
Recommended starter model
Llama 3.1 8B Q4_K_M
→ Build the rest of the stack with /stack-builder
→ Pick a GPU for this app
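
One wrinkle worth spelling out: the recommended starter model does not fully fit in the 4 GB minimum. A minimal sketch of the arithmetic, assuming the commonly cited ~4.85 bits/weight for Q4_K_M and an 8.03B parameter count (both are assumptions, not figures from this page):

```python
# Rough VRAM arithmetic for the starter pick above. The 8.03B
# parameter count and ~4.85 bits/weight for Q4_K_M are assumptions
# drawn from common GGUF quant tables, not figures from this page.
def weight_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    """Estimated size of the quantized weights alone, in GB."""
    return params_billions * bits_per_weight / 8

size = weight_footprint_gb(8.03, 4.85)
print(f"Llama 3.1 8B Q4_K_M weights: ~{size:.1f} GB")  # ~4.9 GB

# The weights alone already exceed a 4 GB card, so the 4 GB minimum
# implies partial offload: llama.cpp keeps some layers in system RAM
# and runs the rest on the GPU. The KV cache costs extra on top,
# growing with context length.
```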

What it is

Jan ships as a desktop binary (no Docker, no terminal). On first run it walks you through picking a model from its curated catalog, downloads it, and you're chatting in under three minutes. It talks to its own embedded llama.cpp runtime or to external Ollama / OpenAI-compatible endpoints. A strong choice for non-technical users who want a 'just works' local chat app.
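
If you plan to use an external runtime rather than the embedded one, it's worth confirming the endpoint is reachable first. A minimal probe, assuming Ollama's stock default of localhost:11434 and its /api/tags listing endpoint; any other OpenAI-compatible server just needs a different URL:

```python
# Quick reachability check before pointing Jan at an external
# runtime. Assumes Ollama's stock default port (11434); swap in
# your own URL for any other OpenAI-compatible endpoint.
import json
import urllib.request

OLLAMA_TAGS = "http://localhost:11434/api/tags"  # lists installed models

try:
    with urllib.request.urlopen(OLLAMA_TAGS, timeout=2) as resp:
        models = json.load(resp).get("models", [])
    print("Ollama is up. Installed models:")
    for m in models:
        print(" -", m.get("name"))
except OSError as err:
    # Nothing listening; Jan's embedded llama.cpp runtime still works.
    print("No Ollama at localhost:11434:", err)
```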

✓ Strengths

  • + Single-binary install — no terminal required
  • + Curated model catalog with one-click download
  • + Built-in OpenAI-compatible API server (see the sketch after this list)
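
That last strength is the interoperability hook: with Jan's local server switched on, anything that speaks the OpenAI API can use it. A sketch with the official openai Python package; the port (1337) and the model id are assumptions, so check Jan's Local API Server settings for the real values:

```python
# Calling Jan's built-in OpenAI-compatible server with the stock
# `openai` client (pip install openai). The port and model id are
# assumptions: check Jan's Local API Server settings for the values
# your install actually uses.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",  # assumed default port for Jan's server
    api_key="not-needed",                 # placeholder; no cloud key involved
)

reply = client.chat.completions.create(
    model="llama3.1-8b-instruct",  # hypothetical id; use the one Jan shows
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(reply.choices[0].message.content)
```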

△ Caveats

  • − Smaller ecosystem than Open WebUI
  • − RAG / file-upload features are less mature

About the Chat UI category

Web or desktop chat client that connects to your local runtime.

§ Other Chat UI apps
AnythingLLM

Best fast-RAG app. Workspace model is the right abstraction for doc-corpora chat.

Open WebUI

Best default chat UI for solo Ollama users. Pick this first; switch only if you outgrow it.

LibreChat

Best if you mix local + cloud models in the same workflow. Strong team features.

Where to go from here

Stack Builder →

Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.

Back to /apps →

The full directory — filter by category, runtime, OS, privacy posture, or VRAM.

Runtimes (/tools) →

What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.

Community benchmarks →

Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.