llama-cpp-python
Overview
Python bindings for llama.cpp with an OpenAI-compatible HTTP server. The fastest path from `pip install` to a working local-LLM endpoint. Ships pre-built wheels with optional CUDA / Metal / ROCm / Vulkan support.
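Because the bundled server speaks the OpenAI wire protocol, existing OpenAI client code can be pointed at it with a one-line change. A minimal sketch, assuming the server extra is installed and a GGUF model is available locally (the model path, port, and model name below are placeholders):

```python
# Prerequisites, run once in a shell (paths are placeholders):
#   pip install 'llama-cpp-python[server]' openai
#   python -m llama_cpp.server --model ./models/model.gguf
from openai import OpenAI

# The server listens on localhost:8000 by default and ignores the API key
# unless one was configured at launch.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder; the server answers with the loaded GGUF
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```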
Pros
- Python ecosystem integration — drop-in replacement for OpenAI client code
- Pre-built wheels per backend — no compile-from-source required
- Inherits llama.cpp's broad quantization support (GGUF, including all k-quants); a minimal sketch follows this list
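For work that doesn't need an HTTP server, the same package exposes an in-process API. A minimal sketch (the model path is a placeholder; any GGUF quant, including k-quants such as Q4_K_M, loads the same way):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-q4_k_m.gguf",  # placeholder path to a GGUF file
    n_ctx=4096,        # context window in tokens
    n_gpu_layers=-1,   # offload all layers if a GPU backend was compiled in
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```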
Cons
- Throughput trails vLLM under concurrent load; suited to single-user or small-team serving
- Each wheel is pinned to one backend (CUDA/cuBLAS, Metal, ROCm), so switching backends requires a reinstall (see the note after this list)
- Typically lags llama.cpp main on the newest features by 1–2 weeks
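On the reinstall point: at the time of writing the project publishes backend-specific wheel indexes, so moving from the CPU wheel to a CUDA build looks roughly like `pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121 --force-reinstall --no-cache-dir`. Index names track backend versions and change between releases, so treat that URL as illustrative and check the project README for the current ones.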
Compatibility
| Attribute | Details |
| --- | --- |
| Operating systems | Windows, macOS, Linux |
| GPU backends | NVIDIA CUDA, AMD ROCm, Apple Metal, Vulkan |
| License | Free and open source |
Runtime health
Operator-grade signals on how actively llama-cpp-python is being maintained, how fresh its measurements are, and what failure classes operators have flagged. Every label below is anchored to a real date or count — we never infer maintainer activity we can't show.
Release cadence
Derived from the most recent editorial signal for this runtime.
6 days since last refresh · source: lastUpdated
Benchmark freshness
How recent the editorial measurements on this runtime are.
No editorial benchmarks for this runtime yet.
Community reproduction
Submissions that match an editorial measurement on similar hardware.
No community reproductions on file yet.
Get llama-cpp-python
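Installation is a standard pip flow. As a hedged quickstart (the model path is a placeholder; backend-specific wheels are covered in the note under Cons): install the CPU wheel with `pip install llama-cpp-python`, add the OpenAI-compatible server with `pip install 'llama-cpp-python[server]'`, then launch it with `python -m llama_cpp.server --model ./models/model.gguf`. By default the server listens on localhost:8000 and exposes the usual `/v1` endpoints.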
Frequently asked
Is llama-cpp-python free?
Yes. It is free, open-source software.
What operating systems does llama-cpp-python support?
Windows, macOS, and Linux.
Which GPUs work with llama-cpp-python?
NVIDIA GPUs via CUDA, AMD GPUs via ROCm, and Apple silicon via Metal; other GPUs with a Vulkan driver work through the Vulkan backend.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.
Related — keep moving
Verify that llama-cpp-python runs on your specific hardware before committing money.