Drop-in OpenAI-compatible proxy across 100+ providers. Route to local Ollama or the cloud with the same client code.
Editorial verdict: “Best universal LLM proxy. Foundational layer for multi-provider deployments.”
Which runtime + OS combos this app works with. Source of truth for “will it run on my setup?”
LiteLLM is a proxy that exposes OpenAI's API shape but routes to 100+ backends: Anthropic, Gemini, Ollama, Together, Groq, local llama.cpp, etc. Drop it between your app and your LLMs, and your app code stays OpenAI-shaped while you swap providers freely. Excellent for migration and A/B testing.
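A minimal sketch of the drop-in pattern, assuming a LiteLLM proxy already running locally on its default port (4000); the master key and the `gpt-4o` alias are illustrative placeholders for whatever your proxy config defines. The point is that only `base_url` and `api_key` change; the OpenAI client code itself stays identical.

```python
# Sketch: the same OpenAI SDK call, pointed at a LiteLLM proxy instead of api.openai.com.
# Assumes a LiteLLM proxy is running on localhost:4000 (its default) and that the
# model alias "gpt-4o" is mapped in its config -- both names are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy instead of OpenAI's endpoint
    api_key="sk-litellm-master-key",   # whatever key your proxy is configured with
)

response = client.chat.completions.create(
    model="gpt-4o",  # resolved by the proxy; could map to Anthropic, Ollama, etc.
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

Swapping providers then becomes a proxy-config change, not an application change, which is what makes migration and A/B testing cheap.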
Thin SDK / proxy / compatibility layer.
Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
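A hedged sketch of routing to one of those upstream backends through LiteLLM's Python SDK directly, without the proxy in between. It assumes Ollama is serving on its default port (11434) and has already pulled a model; the `llama3` name is illustrative.

```python
# Sketch: calling a local Ollama model through the litellm SDK directly.
# Assumes Ollama is running on its default port (11434) and has pulled a
# model named "llama3" -- adjust to whatever you have locally.
from litellm import completion

response = completion(
    model="ollama/llama3",              # provider prefix selects the backend
    api_base="http://localhost:11434",  # default Ollama endpoint
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

Changing the prefix (e.g. `anthropic/...` or `groq/...`) re-routes the same OpenAI-shaped call to a cloud provider instead.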
Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.