

A fully differentiable beam search decoder

Ronan Collobert · Awni Hannun · Gabriel Synnaeve

Pacific Ballroom #226

Keywords: [ Structured Prediction ] [ Speech Processing ] [ Deep Sequence Models ]


We introduce a new beam search decoder that is fully differentiable, making it possible to optimize at training time through the inference procedure. Our decoder allows us to combine models which operate at different granularities (e.g. acoustic and language models). It can be used when target sequences are not aligned to input sequences, by considering all possible alignments between the two. We demonstrate that our approach scales by applying it to speech recognition, jointly training acoustic and word-level language models. The system is end-to-end, with gradients flowing through the whole architecture from the word-level transcriptions. Recent research efforts have shown that deep neural networks with attention-based mechanisms can successfully train an acoustic model from the final transcription, while implicitly learning a language model. Instead, we show that it is possible to discriminatively train an acoustic model jointly with an explicit and possibly pre-trained language model.
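The abstract's key idea — scoring a transcription by marginalizing over all alignments between input frames and target tokens, while combining frame-level acoustic scores with token-level language-model scores — can be sketched with a small forward recursion. This is only an illustrative sketch, not the authors' decoder: the function `forward_score` and its inputs (`emissions` as per-frame acoustic log-probabilities, `lm_scores` as per-token language-model log-probabilities) are hypothetical names chosen for this example, and the soft `logaddexp` over alignments stands in for the hard max/top-k of standard beam search, which is what makes the score differentiable.

```python
import numpy as np

def forward_score(emissions, target, lm_scores):
    """Log-score of `target` marginalized over all monotonic alignments.

    emissions: (T, V) array of acoustic log-probabilities per frame.
    target:    sequence of U token ids (U <= T).
    lm_scores: (U,) per-token language-model log-probabilities.

    Uses only sums and logaddexp (a smooth max), so the score is
    differentiable with respect to both acoustic and LM scores --
    a soft analogue of the pruning step in beam search.
    """
    T = emissions.shape[0]
    U = len(target)
    # alpha[t, u]: log-score of having emitted target[:u+1] after frame t,
    # summed (in log space) over every alignment reaching that state.
    alpha = np.full((T, U), -np.inf)
    alpha[0, 0] = emissions[0, target[0]] + lm_scores[0]
    for t in range(1, T):
        for u in range(U):
            # Either the alignment stays on token u for another frame...
            prev = alpha[t - 1, u]
            if u > 0:
                # ...or it advances from token u-1, paying the LM score
                # for the newly emitted token.
                prev = np.logaddexp(prev, alpha[t - 1, u - 1] + lm_scores[u])
            alpha[t, u] = prev + emissions[t, target[u]]
    return alpha[T - 1, U - 1]
```

In an autodiff framework such as PyTorch the same recursion, written with tensor ops, yields gradients that flow from the word-level score back into both the acoustic and the language model, which is the end-to-end training regime the abstract describes.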
