Translating audio across languages while preserving the original speaker's voice. The pipeline chains STT → translation → cloned-voice TTS.
Install the pipeline components:
pip install faster-whisper   (transcribe source audio)
ollama pull aya-expanse:8b   (translate transcript)
pip install f5-tts           (speak translation in original voice)

Pipeline script:
# Step 1: Transcribe source audio
from faster_whisper import WhisperModel
stt = WhisperModel("large-v3", device="cuda")
segments, _ = stt.transcribe("source.mp3")  # returns a lazy generator of segments
transcript = " ".join(s.text.strip() for s in segments)
# Step 2: Translate
import ollama
resp = ollama.chat(model="aya-expanse:8b", messages=[{
"role": "user",
"content": f"Translate to Spanish: {transcript}"
}])
translated = resp["message"]["content"]
# Step 3: Synthesize with cloned voice
from f5_tts.api import F5TTS  # the F5TTS class lives in f5_tts.api, not the package root
tts = F5TTS(device="cuda")
tts.infer(ref_file="speaker_reference.wav",  # short, clean clip of the original speaker
          ref_text="",                       # empty string: F5-TTS auto-transcribes the reference
          gen_text=translated, file_wave="dubbed.wav")
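For long recordings, sending the whole transcript in one prompt risks overflowing the translation model's context window. A minimal chunking sketch, assuming sentence-level splitting is good enough; the max_chars budget, prompt wording, and translate_long helper are illustrative, not part of the pipeline above:

import ollama

def translate_long(transcript: str, target_lang: str = "Spanish",
                   max_chars: int = 2000) -> str:
    # Pack whole sentences into chunks of at most max_chars characters.
    chunks, current = [], ""
    for sentence in transcript.replace(". ", ".\n").splitlines():
        if current and len(current) + len(sentence) > max_chars:
            chunks.append(current)
            current = ""
        current += sentence + " "
    if current.strip():
        chunks.append(current)
    # Translate each chunk independently, then rejoin.
    parts = []
    for chunk in chunks:
        resp = ollama.chat(model="aya-expanse:8b", messages=[{
            "role": "user",
            "content": f"Translate to {target_lang}. Reply with only the translation:\n{chunk}",
        }])
        parts.append(resp["message"]["content"].strip())
    return " ".join(parts)

translated = translate_long(transcript)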
Used RTX 3060 12 GB (~$200-250, see /hardware/rtx-3060-12gb). Runs the full dubbing pipeline: Whisper large-v3 STT at 15-20× real-time, Aya Expanse 8B translation at 40-60 tok/s, F5-TTS cloning at near-real-time. A 5-minute video dubs in ~20-30 minutes. Pair with a Ryzen 5 5600 + 32 GB DDR4 + 1 TB NVMe. Total: ~$390-440. For CPU-only: Whisper medium + Kokoro TTS (preset voices only) + NLLB-200-distilled-600M as a lighter translator (a usage sketch follows below) dubs a 5-minute video in ~1-2 hours. Functional but slow.
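A minimal sketch of the CPU-only translation step with NLLB-200-distilled-600M via Hugging Face transformers. The model ID and FLORES language codes (eng_Latn, spa_Latn) are standard NLLB conventions; treat the generation settings as assumptions to tune:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

# NLLB handles roughly 1k tokens per call; chunk longer transcripts
# as in the translation sketch above.
inputs = tokenizer(transcript, return_tensors="pt", truncation=True)
out = model.generate(
    **inputs,
    # Force the decoder to start in the target language (Spanish, Latin script).
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("spa_Latn"),
    max_new_tokens=512,
)
translated = tokenizer.decode(out[0], skip_special_tokens=True)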
Used RTX 3090 24 GB (~$700-900, see /hardware/rtx-3090). Runs the full pipeline: Whisper large-v3 STT at 20-30× real-time, Aya Expanse 32B translation at 25-40 tok/s (dramatically better translation quality than the 8B), F5-TTS cloning. A 30-minute video dubs in ~1-2 hours. For production dubbing (entire TV series, 100+ hours), batch the pipeline overnight; a minimal batch driver follows below. Total: ~$1,800-2,200. Dubbing quality is a chain: STT accuracy × translation quality × voice-cloning fidelity. Invest in the weakest link, which for complex content is usually translation.
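A minimal overnight batch driver, assuming the models from the pipeline script and a directory of MP3 episodes. The episodes/ and dubbed/ paths, the aya-expanse:32b tag, and the fixed target language are illustrative assumptions:

from pathlib import Path

import ollama
from faster_whisper import WhisperModel
from f5_tts.api import F5TTS

# Load each model once; reloading per episode wastes hours at this scale.
stt = WhisperModel("large-v3", device="cuda")
tts = F5TTS(device="cuda")

src_dir, out_dir = Path("episodes"), Path("dubbed")
out_dir.mkdir(exist_ok=True)

for src in sorted(src_dir.glob("*.mp3")):
    out = out_dir / f"{src.stem}.dubbed.wav"
    if out.exists():  # resume-friendly: skip episodes already dubbed
        continue
    segments, _ = stt.transcribe(str(src))
    transcript = " ".join(s.text.strip() for s in segments)
    translated = ollama.chat(model="aya-expanse:32b", messages=[{
        "role": "user", "content": f"Translate to Spanish: {transcript}",
    }])["message"]["content"]
    tts.infer(ref_file="speaker_reference.wav", ref_text="",
              gen_text=translated, file_wave=str(out))
    print(f"dubbed: {src.name}")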
The mistake: Running Whisper tiny.en on accented English speech, getting ~60% word accuracy, translating the garbled transcript, then blaming the TTS for "weird robotic dubbing." Why it fails: The pipeline is only as strong as its weakest link. Whisper tiny.en can hit a 30-40% word error rate on accented or noisy speech. The translation model gets garbage input and produces approximate output; the TTS then faithfully reads the wrong words. You blame the TTS, but STT is the culprit. The fix: Always use Whisper large-v3 for dubbing source transcription. The ~3 GB model is worth it: 95%+ accuracy on clean speech, 85-90% on accented/noisy speech. Check the transcript BEFORE translating; if it has errors, fix them manually or re-transcribe (the confidence-flagging sketch below speeds this up). A 5-minute manual transcript review saves 2 hours of re-dubbing downstream garbage.
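One way to make that pre-translation review fast: faster-whisper reports per-segment confidence, so you can flag only the suspect spans for manual checking. A sketch, assuming rough thresholds of -1.0 for avg_logprob and 0.5 for no_speech_prob; tune both on your material:

from faster_whisper import WhisperModel

stt = WhisperModel("large-v3", device="cuda")
segments, _ = stt.transcribe("source.mp3")

# avg_logprob near 0 means confident; strongly negative means the model guessed.
suspect = [seg for seg in segments
           if seg.avg_logprob < -1.0 or seg.no_speech_prob > 0.5]

for seg in suspect:
    print(f"[{seg.start:7.2f}-{seg.end:7.2f}] "
          f"p={seg.avg_logprob:.2f} {seg.text.strip()}")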
Browse all tools for runtimes that fit this workload.
Audio models are surprisingly forgiving on hardware. Whisper, Coqui TTS, and whisper.cpp all run well on 8-12 GB GPUs. The bottleneck is rarely the GPU; it's audio preprocessing and disk I/O for batch transcription, as the sketch below shows.
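A minimal preprocessing sketch: convert sources to 16 kHz mono WAV with ffmpeg before batch transcription, so decode work happens once, up front, instead of stalling the GPU mid-run. The raw/ and prepped/ directory names are assumptions; the ffmpeg flags are standard:

import subprocess
from pathlib import Path

out_dir = Path("prepped")
out_dir.mkdir(exist_ok=True)

for src in Path("raw").iterdir():
    out = out_dir / (src.stem + ".wav")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(src),
        "-ar", "16000",  # 16 kHz sample rate, what Whisper expects internally
        "-ac", "1",      # downmix to mono
        str(out),
    ], check=True)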
The errors most operators hit when running dubbing & translation locally. Each links to a diagnose+fix walkthrough.
Verify your specific hardware can handle dubbing & translation before committing money.