Simultaneous Speech-to-Speech Translation Without Aligned Data
Abstract
Simultaneous speech translation is the task of translating source speech into a target language in real time. Because the dependencies between source and target words are non-monotonic (e.g., word order can differ between German and English), the task requires jointly learning to align and translate. It has traditionally been tackled with supervised training on aligned data; since collecting such data is challenging, prior work falls back on synthetic data with automatic alignments, which rely on language-specific, suboptimal heuristics. We instead propose Hibiki-Zero, a model for simultaneous speech translation trained without word-level alignments between source and target speech. We first train on sentence-level aligned data, so that the model learns to perform speech translation, albeit with high latency. We then introduce a novel reinforcement learning strategy based on GRPO that optimizes the model's translation latency while retaining its translation capabilities. After supervised training and post-training, Hibiki-Zero performs multilingual simultaneous translation with state-of-the-art translation accuracy, latency, voice transfer, and naturalness across five X-to-English tasks. Moreover, we demonstrate that our model can easily be finetuned to support an additional input language with less than 1000 hours of speech data. We provide examples (hibiki-zero-s2st.github.io) as well as models, and release a benchmark containing 15 hours of multilingual data for speech translation evaluation.
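To make the GRPO-based post-training concrete, the following is a minimal sketch of the group-relative advantage computation at the heart of GRPO, paired with a hypothetical scalar reward that trades off translation quality against latency. The `combined_reward` function, its `lam` weight, and the sample scores are illustrative assumptions, not the paper's actual reward design.

```python
import math

def grpo_advantages(rewards):
    """GRPO-style advantages: normalize each sampled completion's reward
    by the mean and standard deviation of its group of samples."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

def combined_reward(quality, latency, lam=0.1):
    """Hypothetical reward: translation quality (e.g. a BLEU-like score
    in [0, 1]) minus a penalty proportional to emission latency (s)."""
    return quality - lam * latency

# Four sampled translations of the same source utterance,
# each scored as (quality, latency_in_seconds) -- illustrative values.
samples = [(0.8, 2.0), (0.7, 1.0), (0.9, 4.0), (0.6, 0.5)]
rewards = [combined_reward(q, l) for q, l in samples]
advs = grpo_advantages(rewards)
```

Samples that are both accurate and fast receive positive advantages relative to their group, so the policy gradient pushes the model toward earlier emission without a learned value network.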