

Poster in Workshop: Hardware-aware efficient training (HAET)

TrimBERT: Tailoring BERT for Trade-offs

Sharath Nittur Sridhar · Anthony Sarah · Sairam Sundaresan


Abstract:

Models based on BERT have been extremely successful in solving a variety of natural language processing (NLP) tasks. Unfortunately, many of these large models require a great deal of computational resources and/or time for pre-training and fine-tuning, which limits wider adoption. While self-attention layers have been well studied, a strong justification for including the intermediate layers that follow them remains missing in the literature. In this work, we show that reducing the number of intermediate layers in BERT-Base results in minimal loss of fine-tuning accuracy on downstream tasks while significantly decreasing model size and training time. We further mitigate two key bottlenecks by replacing the softmax operations in the self-attention layers with a computationally simpler alternative and removing half of all layernorm operations. This further increases throughput while maintaining a high level of fine-tuning accuracy.
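To make the three modifications concrete, here is a minimal PyTorch sketch of a trimmed encoder layer. The abstract does not specify which softmax alternative is used or which layernorms are kept, so the sigmoid-based attention scoring, the placement of the single remaining layernorm, and the removal of the intermediate (feed-forward) sub-layer in this particular layer are illustrative assumptions, not the paper's confirmed design.

```python
# Sketch of the three modifications described in the abstract, under the
# assumptions noted above: fewer intermediate (FFN) sub-layers, softmax
# replaced by a cheaper scoring function, and half of the layernorms removed.
import torch
import torch.nn as nn


class TrimmedSelfAttention(nn.Module):
    """Multi-head self-attention with softmax swapped for a cheaper scoring op."""

    def __init__(self, hidden: int = 768, heads: int = 12):
        super().__init__()
        self.heads, self.head_dim = heads, hidden // heads
        self.qkv = nn.Linear(hidden, 3 * hidden)
        self.out = nn.Linear(hidden, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, t, self.heads, self.head_dim)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        # Assumed softmax alternative: elementwise sigmoid scaled by the
        # sequence length, avoiding softmax's row-wise exp/normalize.
        attn = torch.sigmoid(scores) / t
        ctx = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(ctx)


class TrimmedEncoderLayer(nn.Module):
    """Encoder layer with its intermediate (feed-forward) sub-layer dropped
    and only one of BERT's two per-layer layernorms retained."""

    def __init__(self, hidden: int = 768, heads: int = 12):
        super().__init__()
        self.attn = TrimmedSelfAttention(hidden, heads)
        self.norm = nn.LayerNorm(hidden)  # single remaining layernorm

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.attn(x))  # residual connection, no FFN


if __name__ == "__main__":
    layer = TrimmedEncoderLayer()
    tokens = torch.randn(2, 128, 768)  # (batch, seq_len, hidden)
    print(layer(tokens).shape)         # torch.Size([2, 128, 768])
```

In this sketch the intermediate sub-layer is removed from the layer entirely; the paper's approach may instead drop it from only a subset of the 12 BERT-Base layers, trading off accuracy against size and training time.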
