Despite the impressive performance recently achieved by automatic speech recognition (ASR), two primary challenges hinder its broader application: (1) scalability, i.e., supporting more languages with limited training, inference, and storage overhead; and (2) low-resource adaptation, i.e., adapting effectively to languages with scarce data while avoiding overfitting and catastrophic forgetting. Inspired by recent findings, we hypothesize that both challenges can be addressed with modules widely shared across languages. To this end, we propose an ASR framework, dubbed Master-ASR, that, for the first time, simultaneously achieves strong multilingual scalability and low-resource adaptation ability thanks to its modularize-then-assemble strategy. Specifically, Master-ASR learns a small set of generalizable sub-modules and adaptively assembles them for different languages, reducing multilingual overhead and enabling effective knowledge transfer for low-resource adaptation. Extensive experiments and visualizations demonstrate that Master-ASR effectively discovers language similarity and improves multilingual and low-resource ASR performance over state-of-the-art (SOTA) methods, e.g., a 0.13∼2.41 lower character error rate (CER) with 30% smaller inference overhead on multilingual ASR, and a comparable CER with nearly 100× fewer trainable parameters on low-resource tuning.
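The modularize-then-assemble idea — a small shared pool of sub-modules mixed with learned per-language weights — can be illustrated with a minimal sketch. All names here (`ModularLayer`, `assemble`, the scaling-vector sub-modules) are illustrative assumptions for exposition, not the paper's actual implementation, which operates on full neural network layers.

```python
# Hypothetical sketch: a pool of shared sub-modules is adaptively
# assembled per language via softmax-normalized routing weights.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class ModularLayer:
    def __init__(self, num_modules, dim):
        # Shared sub-modules; here just per-module scaling vectors
        # standing in for real network sub-layers.
        self.modules = [[1.0 + 0.1 * m] * dim for m in range(num_modules)]
        # Per-language logits over the module pool (learned in practice;
        # initialized uniformly here).
        self.lang_logits = {}

    def assemble(self, lang):
        # Mix the sub-modules with this language's assembly weights.
        logits = self.lang_logits.setdefault(lang, [0.0] * len(self.modules))
        w = softmax(logits)
        dim = len(self.modules[0])
        return [sum(w[m] * self.modules[m][d] for m in range(len(self.modules)))
                for d in range(dim)]

    def forward(self, x, lang):
        # Apply the assembled module to the input elementwise.
        weights = self.assemble(lang)
        return [xi * wi for xi, wi in zip(x, weights)]
```

Because every language reuses the same small pool, adding a language only adds one routing vector rather than a full per-language model, which is the source of the storage and inference savings the abstract describes.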
Author Information
Zhongzhi Yu (Georgia Institute of Technology)
Yang Zhang (MIT-IBM Watson AI Lab)
Kaizhi Qian (MIT-IBM Watson AI Lab)
Cheng Wan (Georgia Institute of Technology)
Yonggan Fu (Georgia Institute of Technology)
Yongan Zhang (Rice University)
Yingyan (Celine) Lin (Georgia Institute of Technology)
More from the Same Authors
- 2023 Poster: NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations
  Yonggan Fu · Ye Yuan · Souvik Kundu · Shang Wu · Shunyao Zhang · Yingyan (Celine) Lin
- 2023 Poster: Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models
  Guanhua Zhang · Jiabao Ji · Yang Zhang · Mo Yu · Tommi Jaakkola · Shiyu Chang
- 2023 Poster: PromptBoosting: Black-Box Text Classification with Ten Forward Passes
  Bairu Hou · Joe O'Connor · Jacob Andreas · Shiyu Chang · Yang Zhang
- 2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Poster: ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers
  Kaizhi Qian · Yang Zhang · Heting Gao · Junrui Ni · Cheng-I Lai · David Cox · Mark Hasegawa-Johnson · Shiyu Chang
- 2022 Spotlight: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Spotlight: ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers
  Kaizhi Qian · Yang Zhang · Heting Gao · Junrui Ni · Cheng-I Lai · David Cox · Mark Hasegawa-Johnson · Shiyu Chang
- 2022 Poster: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
  Haoran You · Baopu Li · Shi Huihong · Yonggan Fu · Yingyan Lin
- 2022 Poster: DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
  Yonggan Fu · Haichuan Yang · Jiayi Yuan · Meng Li · Cheng Wan · Raghuraman Krishnamoorthi · Vikas Chandra · Yingyan Lin
- 2022 Spotlight: DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
  Yonggan Fu · Haichuan Yang · Jiayi Yuan · Meng Li · Cheng Wan · Raghuraman Krishnamoorthi · Vikas Chandra · Yingyan Lin
- 2022 Spotlight: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
  Haoran You · Baopu Li · Shi Huihong · Yonggan Fu · Yingyan Lin
- 2021 Poster: Global Prosody Style Transfer Without Text Transcriptions
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson
- 2021 Poster: Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference
  Yonggan Fu · Qixuan Yu · Meng Li · Vikas Chandra · Yingyan Lin
- 2021 Spotlight: Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference
  Yonggan Fu · Qixuan Yu · Meng Li · Vikas Chandra · Yingyan Lin
- 2021 Oral: Global Prosody Style Transfer Without Text Transcriptions
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson
- 2021 Poster: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators
  Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin
- 2021 Spotlight: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators
  Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin
- 2020 Poster: Invariant Rationalization
  Shiyu Chang · Yang Zhang · Mo Yu · Tommi Jaakkola
- 2020 Poster: Unsupervised Speech Decomposition via Triple Information Bottleneck
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Mark Hasegawa-Johnson · David Cox
- 2020 Poster: AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks
  Yonggan Fu · Wuyang Chen · Haotao Wang · Haoran Li · Yingyan Lin · Zhangyang “Atlas” Wang
- 2019 Poster: AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Xuesong Yang · Mark Hasegawa-Johnson
- 2019 Oral: AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Xuesong Yang · Mark Hasegawa-Johnson