Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models

Early Attentive Sparsification Accelerates Neural Speech Transcription

Zifei Xu · Sayeh Sharify · Hesham Mostafa · Tristan Webb · Wanzin Yazar · Xin Wang
Abstract:

Transformer-based neural speech processing has achieved state-of-the-art performance. Since speech audio signals are known to be highly compressible, we seek to accelerate neural speech transcription through time-domain signal sparsification early in the neural encoding stage, exploiting the interpretability of the self-attention mechanism in transformer audio encoders. With the Whisper family of models, we perform a systematic architecture search over the joint space of sparsification stage (a given encoder layer) and compression ratio (sparsity). We find that the best solutions under 1% accuracy degradation sparsify the hidden state to 40-60% sparsity at an early encoding stage, achieving up to $1.6\times$ runtime acceleration on English speech transcription tasks on NVIDIA GPUs without any fine-tuning.
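Below is a minimal illustrative sketch, not the authors' released code, of the idea described in the abstract: after an early transformer encoder layer, time frames are ranked by how much self-attention they receive, and only the top fraction is kept for the remaining layers, shortening the sequence that later encoder layers and the decoder's cross-attention must process. The toy encoder dimensions, the `prune_layer` and `keep_ratio` parameters, and the use of a separate scoring attention module are assumptions made for brevity; the paper's exact scoring rule and Whisper integration may differ.

```python
# Sketch of attention-guided early sparsification (assumptions noted above).
import torch
import torch.nn as nn


class ToyAudioEncoder(nn.Module):
    """A stack of standard transformer encoder layers over audio frames,
    with attention-guided frame pruning at one early stage."""

    def __init__(self, d_model=384, n_heads=6, n_layers=4, prune_layer=1, keep_ratio=0.5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        # Separate attention module used only to score frames (a simplification;
        # one could instead reuse the encoder layer's own attention weights).
        self.scorer = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.prune_layer = prune_layer  # encoder stage at which frames are dropped
        self.keep_ratio = keep_ratio    # fraction of frames kept, i.e. 1 - sparsity

    def forward(self, x):  # x: (batch, frames, d_model)
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i == self.prune_layer:
                # Column-wise mean of the attention map estimates how much
                # attention each frame receives; low-scoring frames are
                # assumed redundant and dropped.
                _, attn_weights = self.scorer(x, x, x, need_weights=True)
                scores = attn_weights.mean(dim=1)            # (batch, frames)
                k = max(1, int(self.keep_ratio * x.size(1)))
                keep = scores.topk(k, dim=-1).indices.sort(dim=-1).values
                x = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return x  # shortened sequence -> cheaper later layers and cross-attention


encoder = ToyAudioEncoder()
frames = torch.randn(2, 1500, 384)   # Whisper-style encoder length for ~30 s of audio
print(encoder(frames).shape)         # roughly half the frames survive pruning
```

Sorting the surviving indices after `topk` keeps the retained frames in temporal order, so the positional structure of the sequence is preserved for the later layers.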
                    
                    