Spotlight
UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data
Chengyi Wang · Yu Wu · Yao Qian · Kenichi Kumatani · Shujie Liu · Furu Wei · Michael Zeng · Xuedong Huang

Thu Jul 22 05:40 PM -- 05:45 PM (PDT)

In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both labeled and unlabeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resulting representations capture information more correlated with phonetic structures and generalize better across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on the public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pre-training and supervised transfer learning for speech recognition by up to 13.4% and 26.9% relative phone error rate reduction, respectively (averaged over all testing languages). The transferability of UniSpeech is also verified on a domain-shift speech recognition task, where it achieves a relative word error rate reduction of 6% over the previous approach.
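The core idea described above, jointly optimizing a supervised phonetic CTC loss and a contrastive self-supervised loss, amounts to a weighted multi-task objective. Below is a minimal, hypothetical PyTorch sketch of such a combined loss; the function names, the weighting hyperparameter `alpha`, and the simple in-batch negative sampling are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a UniSpeech-style multi-task objective (illustrative only):
# a weighted sum of a supervised phonetic CTC loss and a contrastive loss.
import torch
import torch.nn.functional as F

def contrastive_loss(features, quantized, temperature=0.1):
    """Contrast each context vector against its quantized target, using all
    other time steps in the batch as negatives (a simplification of the
    paper's negative-sampling scheme)."""
    f = F.normalize(features.reshape(-1, features.size(-1)), dim=-1)
    q = F.normalize(quantized.reshape(-1, quantized.size(-1)), dim=-1)
    sim = f @ q.t() / temperature             # (N, N) cosine similarities
    labels = torch.arange(sim.size(0))        # positive pair is on the diagonal
    return F.cross_entropy(sim, labels)

def unispeech_style_loss(logits, targets, input_lengths, target_lengths,
                         features, quantized, alpha=0.5):
    """alpha (an assumed knob) weights the supervised CTC branch against
    the self-supervised contrastive branch."""
    log_probs = F.log_softmax(logits, dim=-1)  # CTC expects log-probabilities
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0)
    return alpha * ctc + (1.0 - alpha) * contrastive_loss(features, quantized)

# Toy usage with random tensors:
# T=50 frames, B=2 utterances, V=40 phone classes (index 0 = blank), D=256 dims.
T, B, V, D, S = 50, 2, 40, 256, 12
logits = torch.randn(T, B, V)
targets = torch.randint(1, V, (B, S))
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), S, dtype=torch.long)
features, quantized = torch.randn(T, B, D), torch.randn(T, B, D)
print(float(unispeech_style_loss(logits, targets, input_lengths,
                                 target_lengths, features, quantized)))
```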

Author Information

Chengyi Wang (Nankai University)
Yu Wu (Microsoft Research)
Yao Qian (Microsoft)
Kenichi Kumatani (Microsoft)
Shujie Liu (Microsoft Research Asia)
Furu Wei (Microsoft Research Asia)
Michael Zeng (Microsoft)
Xuedong Huang (Microsoft)
