Server · Open source · free (OSS, MIT)

LocalAI

OpenAI-API-compatible drop-in for self-hosted inference, with a multi-backend twist: the same endpoint can serve LLMs (llama.cpp / vLLM under the hood), embeddings, image gen (stable-diffusion.cpp), audio (whisper.cpp), and TTS — each with its own backend selected per-model. The pragmatic choice when you want one server URL and a heterogeneous AI stack behind it.

By Fredoline Eruo · Last verified May 6, 2026 · 35,000 GitHub stars

Overview

LocalAI exposes a single OpenAI-compatible HTTP API and dispatches each configured model to an appropriate backend: llama.cpp or vLLM for text generation, stable-diffusion.cpp for images, whisper.cpp for audio, plus dedicated backends for embeddings and TTS. Models are declared in per-model YAML files and the server selects the backend for each one, so a single URL can front a heterogeneous self-hosted AI stack.
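
Because the API is OpenAI-compatible, existing OpenAI client code can usually be pointed at a LocalAI instance unchanged. The sketch below assumes a server listening at http://localhost:8080/v1 and a model registered under the name "my-local-model"; the port, model name, and placeholder API key are illustrative assumptions, not values taken from this page.

    # Minimal sketch: calling a LocalAI endpoint with the official OpenAI Python SDK.
    # Assumptions: server at http://localhost:8080/v1, a model configured as "my-local-model".
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # point the SDK at LocalAI instead of api.openai.com
        api_key="not-needed",                 # placeholder; only matters if you enable auth on the server
    )

    resp = client.chat.completions.create(
        model="my-local-model",  # whichever name your model configuration registers
        messages=[{"role": "user", "content": "Summarise what LocalAI does in one sentence."}],
    )
    print(resp.choices[0].message.content)

Which engine actually serves the request (llama.cpp, vLLM, or another backend) is a server-side configuration choice; the client code does not change.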

Stack & relationships

How LocalAI relates to other entries in the catalog — recommended pairings, alternatives, dependencies, and edges to avoid. Each edge carries a one-line operator note from our editorial team.

LocalAI ↔ ecosystem

Works with

  • Works with
    vLLM

    LocalAI can route to a vLLM backend for production-throughput LLM inference while still serving image/audio/TTS through other backends behind the same endpoint (see the sketch at the end of this section).

Alternatives

  • Alternative to
    Ollama

    Both are OpenAI-compatible local servers. Ollama is single-purpose (LLM inference, curated models); LocalAI is multi-modal (LLM + embedding + image + audio + TTS) with backend switching per model. Pick LocalAI when you want one endpoint for a heterogeneous stack.

  • Competes with
    Ollama

    Same OpenAI-API-compatible local server category, different scope. Ollama wins on simplicity; LocalAI wins on multi-modality. Genuine competition for the 'self-hosted multi-purpose AI server' slot.

Depends on

  • Depends on
    llama.cpp

    LocalAI uses llama.cpp as one of several backends for LLM inference. Architecture coverage tracks llama.cpp upstream for the LLM path; image/audio backends are separate.
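
To make the one-endpoint, many-backends pattern above concrete, here is a hedged sketch that reuses the same client setup from the overview example for embeddings and image generation. The model names "my-embedder" and "my-diffuser" are hypothetical placeholders for whatever your LocalAI configuration registers, assumed to be mapped to an embedding backend and to stable-diffusion.cpp respectively.

    # Hedged sketch: the same LocalAI base URL serving two more modalities.
    # Model names are hypothetical; they must match models configured on your server.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    # Embeddings request, routed by LocalAI to an embedding backend.
    emb = client.embeddings.create(model="my-embedder", input="local-first inference")
    print(len(emb.data[0].embedding), "dimensions")

    # Image generation request, routed to an image backend such as stable-diffusion.cpp.
    img = client.images.generate(model="my-diffuser", prompt="a tiny llama in a server rack", size="512x512")
    print(img.data[0].url)  # depending on server settings the image may come back as a URL or as base64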

Pros

  • One endpoint for LLM + embedding + image + audio + TTS
  • Backend switching per model (llama.cpp / vLLM / diffusion / whisper)
  • Strong K8s deployment story via the LocalAI operator

Cons

  • Per-backend performance trails dedicated runtimes (it's a multiplexer, not a specialised engine)
  • Configuration surface is large — model YAMLs accumulate quickly
  • Less battle-tested than vLLM for high-QPS LLM-only workloads

Compatibility

Operating systems: Linux, macOS, Windows, Docker, Kubernetes
GPU backends: NVIDIA CUDA, AMD ROCm, Apple Metal, CPU
License: Open source · free (OSS, MIT)

Get LocalAI
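
Once an instance is running (see the project's own install docs for the supported options), a quick way to confirm the endpoint is reachable is to list the models it exposes through the OpenAI-compatible models route. The URL below is an assumption; substitute your own host and port.

    # Sanity check against a running LocalAI instance; the URL is an assumption.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    for model in client.models.list():
        print(model.id)  # names come from the models configured on the server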

Frequently asked

Is LocalAI free?

Yes. LocalAI is open source and free to use under the MIT license.

What operating systems does LocalAI support?

LocalAI runs on Linux, macOS, and Windows, and can also be deployed via Docker or on Kubernetes.

Which GPUs work with LocalAI?

LocalAI supports GPU acceleration via NVIDIA CUDA, AMD ROCm, and Apple Metal. CPU-only inference is also possible, but slower.

Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.