
llama-cpp-python

Python bindings for llama.cpp with an OpenAI-compatible HTTP server. The fastest path from `pip install` to a working local-LLM endpoint. Ships pre-built wheels with optional CUDA / Metal / ROCm / Vulkan support.

By Fredoline Eruo · Last verified May 7, 2026 · 9,200 GitHub stars

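Because the server speaks the OpenAI chat-completions wire format, any HTTP client can talk to it. A minimal stdlib sketch, assuming the server was started with `python -m llama_cpp.server` on its default port 8000 (the host, port, and placeholder model name are assumptions; the server answers for whichever GGUF file it was launched with):

```python
import json
import urllib.request

# Default bind address of `python -m llama_cpp.server`; configurable
# via --host / --port, so treat this as an assumption.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(messages, model="local-model", max_tokens=128):
    """Build an OpenAI-style /chat/completions request for the local server."""
    body = json.dumps({
        "model": model,          # placeholder; the server ignores or echoes it
        "messages": messages,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello!"}])
# Actually sending requires a running server:
#   resp = urllib.request.urlopen(req)
#   print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```

The same request shape works from the official `openai` client by pointing its `base_url` at the local endpoint, which is what "OpenAI-compatible" buys you.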

Pros

  • Python ecosystem integration — drop-in replacement for OpenAI client code
  • Pre-built wheels per backend — no compile-from-source required
  • Inherits llama.cpp's broad quant support (GGUF + every k-quant)
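One way the quant breadth shows up in practice: GGUF files conventionally carry their quantization level in the filename (e.g. `Q4_K_M`, `Q5_K_S`, `Q8_0`). A small helper to pull the suffix out; note the filename pattern is a community convention, not a format guarantee, and the regex is a best-effort sketch:

```python
import re

# Common llama.cpp quant suffixes seen in GGUF filenames (non-exhaustive).
QUANT_PATTERN = re.compile(
    r"(Q\d+_K_[SML]|Q\d+_K|Q\d+_\d+|IQ\d+_\w+|F16|F32)",
    re.IGNORECASE,
)

def quant_of(filename: str):
    """Best-effort extraction of the quant level from a GGUF filename."""
    m = QUANT_PATTERN.search(filename)
    return m.group(1).upper() if m else None

print(quant_of("llama-3-8b-instruct.Q4_K_M.gguf"))  # Q4_K_M
print(quant_of("mistral-7b.Q8_0.gguf"))             # Q8_0
```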

Cons

  • Throughput trails vLLM under concurrent load; best suited to single-user or small-team deployments
  • Backend is pinned at install time (cuBLAS / Metal / ROCm); switching requires a reinstall
  • Typically lags llama.cpp main by 1–2 weeks on the latest features

Compatibility

Operating systems
  • Windows
  • macOS
  • Linux

GPU backends
  • NVIDIA CUDA
  • AMD ROCm
  • Apple Metal
  • Vulkan
License
Open source · free + open-source

Runtime health

Operator-grade signals on how actively llama-cpp-python is being maintained, how fresh its measurements are, and what failure classes operators have flagged. Every label below is anchored to a real date or count — we never infer maintainer activity we can't show.

Release cadence

Derived from the most recent editorial signal on this row.

Active
Updated May 7, 2026

6 days since last refresh · source: lastUpdated

Benchmark freshness

How recent the editorial measurements on this runtime are.

0 editorial benchmarks

No editorial benchmarks for this runtime yet.

Community reproduction

Submissions that match an editorial measurement on similar hardware.

0 reproduced reports

No community reproductions on file yet.

Get llama-cpp-python

Frequently asked

Is llama-cpp-python free?

Yes. llama-cpp-python is free and open-source; there is no paid tier. See the project's license for current terms.

What operating systems does llama-cpp-python support?

llama-cpp-python supports Windows, macOS, Linux.

Which GPUs work with llama-cpp-python?

llama-cpp-python supports NVIDIA CUDA, AMD ROCm, Apple Metal, and Vulkan. CPU-only inference is also possible but slow.
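The GPU/CPU choice mostly shows up as constructor options. A hedged sketch of the kwargs you would pass to `llama_cpp.Llama` (`model_path`, `n_gpu_layers`, and `n_ctx` are real llama-cpp-python parameters; the model path is a placeholder), kept as a plain dict so it runs even without the package installed:

```python
def llama_kwargs(model_path: str, gpu: bool = True) -> dict:
    """Kwargs for llama_cpp.Llama.

    n_gpu_layers=-1 offloads every layer to whichever GPU backend the
    installed wheel was built for (CUDA / Metal / ROCm / Vulkan);
    n_gpu_layers=0 keeps inference entirely on the CPU.
    """
    return {
        "model_path": model_path,
        "n_gpu_layers": -1 if gpu else 0,  # -1 = all layers on GPU
        "n_ctx": 4096,                     # context window size
    }

cpu_cfg = llama_kwargs("/path/to/model.gguf", gpu=False)
print(cpu_cfg["n_gpu_layers"])  # 0
# With the package installed: llm = llama_cpp.Llama(**llama_kwargs(...))
```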

Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.

Related — keep moving

Before you buy

Verify llama-cpp-python runs on your specific hardware before committing money.
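One quick preflight that needs nothing beyond the standard library: check whether the package imports at all in your environment before downloading a multi-gigabyte model.

```python
import importlib.util

def is_llama_cpp_available() -> bool:
    """True if the llama_cpp package is importable in this environment."""
    return importlib.util.find_spec("llama_cpp") is not None

if is_llama_cpp_available():
    print("llama-cpp-python is installed; try loading a small GGUF next.")
else:
    print("llama-cpp-python not found; install a wheel for your backend first.")
```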