Poster
Word-Level Speech Recognition With a Letter to Word Encoder
Ronan Collobert · Awni Hannun · Gabriel Synnaeve

Tue Jul 14 12:00 PM -- 12:45 PM & Tue Jul 14 11:00 PM -- 11:45 PM (PDT)

We propose a direct-to-word sequence model that uses a word network to learn word embeddings from letters. The word network can be integrated seamlessly with arbitrary sequence models, including Connectionist Temporal Classification and encoder-decoder models with attention. We show that our direct-to-word model achieves word error rate gains over sub-word level models for speech recognition. We also show that our direct-to-word approach retains the ability to predict words not seen at training time without any retraining. Finally, we demonstrate that a word-level model can use a larger stride than a sub-word level model while maintaining accuracy. This makes the model more efficient both for training and inference.
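To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a letter-to-word encoder: each word's spelling is embedded letter by letter, pooled into a single vector, and that vector serves as the word's output embedding, which acoustic frames are scored against. All module choices, sizes, and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LetterToWordEncoder(nn.Module):
    """Builds word embeddings from letter sequences (spellings). Illustrative only."""

    def __init__(self, num_letters: int, dim: int = 256):
        super().__init__()
        self.letter_emb = nn.Embedding(num_letters, dim, padding_idx=0)
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, spellings: torch.Tensor) -> torch.Tensor:
        # spellings: (vocab_size, max_word_len) letter indices, 0 = padding
        x = self.letter_emb(spellings)                     # (V, L, dim)
        out, _ = self.encoder(x)                           # (V, L, 2*dim)
        mask = (spellings != 0).unsqueeze(-1).float()      # ignore padded positions
        pooled = (out * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return self.proj(pooled)                           # (V, dim): one embedding per word


# Hypothetical 1000-word lexicon: any new word only needs a spelling,
# so its embedding can be computed without retraining the model.
word_encoder = LetterToWordEncoder(num_letters=30, dim=256)
spellings = torch.randint(1, 30, (1000, 12))
word_embeddings = word_encoder(spellings)                  # (1000, 256)

# Acoustic encoder output is scored against every word embedding,
# giving per-frame log-probabilities over the word vocabulary.
acoustic_frames = torch.randn(8, 50, 256)                  # (batch, time, dim)
logits = acoustic_frames @ word_embeddings.T               # (8, 50, 1000)
log_probs = logits.log_softmax(dim=-1)
```

In this sketch the word scores could be trained with a CTC-style or attention-based criterion, as the abstract describes; the pooling, recurrent encoder, and dot-product scoring are assumptions made for brevity.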

Author Information

Ronan Collobert (Facebook AI Research)
Awni Hannun (Facebook AI Research)
Gabriel Synnaeve (Facebook AI Research)
