CTranslate2
Overview
CTranslate2 is a specialized Transformer inference engine and the reference runtime for Whisper (via faster-whisper), NLLB translation, and other encoder-decoder models, with out-of-the-box INT8 quantization and strong CPU performance.
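As a concrete sketch of that Whisper path, the snippet below transcribes a local file with faster-whisper on CPU using INT8 weights. The model name "large-v3" and the file name "audio.wav" are placeholders, and it assumes faster-whisper is installed (`pip install faster-whisper`).

```python
from faster_whisper import WhisperModel

# Whisper Large with INT8 weights on a plain CPU -- the configuration the
# "strong CPU performance" claim above refers to.
model = WhisperModel("large-v3", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.wav", beam_size=5)
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Switching `device` to "cuda" and `compute_type` to "float16" or "int8_float16" moves the same code onto an NVIDIA GPU.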
Pros
- Whisper inference reference — faster-whisper uses CTranslate2 under the hood
- Strong CPU INT8 path — runs Whisper Large at usable speed on a laptop CPU
- Encoder-decoder optimization that LLM runtimes don't prioritize (see the translation sketch after this list)
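A minimal sketch of that encoder-decoder path, assuming an NLLB checkpoint already converted to the CTranslate2 format (the `nllb-200-distilled-600M-ct2` directory name is a placeholder) and the Hugging Face tokenizer for the original model:

```python
import ctranslate2
import transformers

# Load a converted NLLB model; compute_type="int8" uses the built-in quantization.
translator = ctranslate2.Translator("nllb-200-distilled-600M-ct2",
                                    device="cpu", compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn")

# CTranslate2 operates on token strings, so encode and map ids back to tokens.
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("The weather is nice today."))
results = translator.translate_batch([source], target_prefix=[["fra_Latn"]])

# Drop the forced target-language token before decoding.
target_tokens = results[0].hypotheses[0][1:]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target_tokens)))
```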
Cons
- Decoder-only LLMs are not the primary target — vLLM / llama.cpp lead there
- Smaller community than the LLM-focused runtimes
- Conversion step required — supported architectures must be exported to the CTranslate2 format before use (a minimal conversion sketch follows this list)
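A minimal sketch of that conversion step using the Python converter API; the output directory name is a placeholder, and the `ct2-transformers-converter` command-line tool performs the same export. It assumes both `ctranslate2` and `transformers` are installed.

```python
from ctranslate2.converters import TransformersConverter

# Export a Hugging Face checkpoint to the CTranslate2 format with INT8 weights.
converter = TransformersConverter("facebook/nllb-200-distilled-600M")
converter.convert("nllb-200-distilled-600M-ct2", quantization="int8")
```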
Compatibility
| Category | Support |
| --- | --- |
| Operating systems | Windows, macOS, Linux |
| GPU backends | NVIDIA CUDA, Apple |
| License | Free and open source |
Runtime health
Operator-grade signals on how actively CTranslate2 is being maintained, how fresh its measurements are, and what failure classes operators have flagged. Every label below is anchored to a real date or count — we never infer maintainer activity we can't show.
Release cadence
Derived from the most recent editorial signal recorded for this runtime.
6 days since last refresh · source: lastUpdated
Benchmark freshness
How recent the editorial measurements on this runtime are.
No editorial benchmarks for this runtime yet.
Community reproduction
Submissions that match an editorial measurement on similar hardware.
No community reproductions on file yet.
Get CTranslate2
Frequently asked
Is CTranslate2 free?
Yes. CTranslate2 is free and open-source software.
What operating systems does CTranslate2 support?
Windows, macOS, and Linux.
Which GPUs work with CTranslate2?
NVIDIA GPUs through the CUDA backend; Apple hardware is also listed under GPU backends above.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.
Related — keep moving
Verify CTranslate2 runs on your specific hardware before committing money.