Continue
Open-source autocomplete + chat for VS Code and JetBrains. Local-model-first.
Editorial verdict: “Best Copilot replacement that defaults to local. Configurable; pair with Qwen 2.5 Coder.”
Compatibility at a glance
Which runtime + OS combos this app works against. Source of truth for "will it run on my setup?"
What it is
Continue is the open-source rival to Cursor and Copilot. It does autocomplete (fill-in-the-middle, FIM, with a local code-completion model), inline chat, and edit-via-prompt, all configurable to use Ollama, llama.cpp, or any OpenAI-compatible endpoint. The default config nudges you toward local; cloud is an option, not the default.
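For concreteness, here is a minimal sketch of the local-first setup the verdict suggests, written against Continue's config.yaml format. Treat it as an assumption-laden sketch: the model tags track Ollama's qwen2.5-coder naming and the role names follow Continue's documented roles, but the exact schema may differ on your version.

```yaml
# ~/.continue/config.yaml (sketch; exact schema fields vary by Continue version)
name: local-first
version: 0.0.1

models:
  # A mid-size instruct model handles chat and prompt-driven edits.
  - name: Qwen 2.5 Coder (chat/edit)
    provider: ollama
    model: qwen2.5-coder:7b
    roles:
      - chat
      - edit

  # A small base model keeps FIM autocomplete latency tolerable.
  - name: Qwen 2.5 Coder (autocomplete)
    provider: ollama
    model: qwen2.5-coder:1.5b-base
    roles:
      - autocomplete
```

Splitting roles like this is the usual pattern: completion wants a small, fast FIM-trained model, while chat and edit can afford a bigger one.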
✓ Strengths
- Single extension covers autocomplete + chat + edit
- Excellent local-model defaults
- JetBrains support is on par with VS Code
△ Caveats
- Autocomplete latency on slower hardware is noticeable (~150–300 ms)
- Configuration is YAML-heavy (see the sketch below)
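The YAML caveat cuts both ways: because everything is declared in one file, swapping backends is a small diff. A hedged sketch of pointing Continue at any OpenAI-compatible server instead of Ollama; the provider and apiBase field names follow Continue's documented config, while the URL, key, and model name are placeholders for whatever your server (llama.cpp's llama-server, vLLM, LM Studio) actually exposes.

```yaml
# Same file, different backend: any OpenAI-compatible endpoint works.
models:
  - name: local llama.cpp
    provider: openai                    # speaks the OpenAI wire format
    apiBase: http://localhost:8080/v1   # placeholder: your server's address
    apiKey: none                        # placeholder: local servers usually ignore it
    model: qwen2.5-coder-32b            # placeholder: whatever the server loaded
    roles:
      - chat
      - edit
```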
About the Coding agent category
Editor-integrated or CLI agents that edit code via your model. Editorial verdicts across this category:
- Best terminal-native coding agent for local models; Qwen 2.5 Coder 32B is its sweet spot.
- Best self-hosted server for teams; SSO + audit logs make it the IT-friendly pick.
- Best minimal-surface Copilot replacement that's been Ollama-native since day one.
- Best IDE-integrated agent that treats 'all local' as a first-class option.
Where to go from here
- Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
- The full directory: filter by category, runtime, OS, privacy posture, or VRAM.
- What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
- Did this app work for you on a specific rig? Submit the benchmark; it powers the model + hardware pages.