

Poster in Workshop: Hardware-aware efficient training (HAET)

Efficient Fine-Tuning of Compressed Language Models with Learners

Danilo Vucetic · Mohammadreza Tayaranian · Maryam Zia · James J. Clark · Brett Meyer · Warren Gross


Abstract:

Fine-tuning BERT-based models is resource-intensive in memory, computation, and time. While many prior works aim to improve inference efficiency via compression techniques, e.g., pruning, these works do not explicitly address the computational challenges of fine-tuning on downstream tasks. We introduce the Learner module, a novel method for fine-tuning that exploits the overparameterization of pre-trained language models to gain benefits in convergence speed and resource utilization. Learner modules navigate the double bind of 1) training efficiently by fine-tuning a subset of parameters, and 2) training effectively by ensuring quick convergence and high metric scores. Our results on DistilBERT demonstrate that learners perform on par with or surpass the baselines. Learners fine-tune 7x fewer parameters than state-of-the-art methods on GLUE. On CoLA, learners fine-tune 20% faster and have significantly lower resource utilization.
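The sketch below illustrates the general idea of fine-tuning only a small subset of parameters on top of a frozen, compressed backbone, in the spirit of the abstract. It is not the paper's Learner module: the `LearnerHead` class, its bottleneck design, and the hyperparameters are assumptions made for illustration, and only the standard Hugging Face DistilBERT and PyTorch APIs are used.

```python
# Hypothetical sketch: freeze a pre-trained DistilBERT backbone and train only a
# small task head, so that a tiny fraction of parameters receives gradients.
# "LearnerHead" is an illustrative stand-in, not the paper's Learner module.
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast

backbone = DistilBertModel.from_pretrained("distilbert-base-uncased")
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

# Freeze every backbone parameter; only the head below will be updated.
for p in backbone.parameters():
    p.requires_grad = False

class LearnerHead(nn.Module):
    """Small trainable module on top of the frozen encoder (illustrative only)."""
    def __init__(self, hidden=768, bottleneck=64, num_labels=2):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, num_labels)

    def forward(self, cls_state):
        return self.up(torch.relu(self.down(cls_state)))

learner = LearnerHead()
optimizer = torch.optim.AdamW(learner.parameters(), lr=3e-4)

# One illustrative training step on a toy CoLA-style acceptability example.
batch = tokenizer(["The cat sat on the mat."], return_tensors="pt")
labels = torch.tensor([1])  # 1 = acceptable

with torch.no_grad():  # backbone is frozen, so no gradients are needed here
    cls_state = backbone(**batch).last_hidden_state[:, 0]

loss = nn.functional.cross_entropy(learner(cls_state), labels)
loss.backward()
optimizer.step()

trainable = sum(p.numel() for p in learner.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")
```

Because the backbone is frozen, the encoder forward pass can run without storing activations for backpropagation, which is one reason subset fine-tuning of this kind can reduce memory and time; the abstract's reported gains come from the Learner modules themselves, whose exact design is described in the paper.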
