Poster
Less is More: Task-aware Layer-wise Distillation for Language Model Compression
Chen Liang · Simiao Zuo · Qingru Zhang · Pengcheng He · Weizhu Chen · Tuo Zhao

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #526

Layer-wise distillation is a powerful tool to compress large models (i.e., teacher models) into small ones (i.e., student models). The student distills knowledge from the teacher by mimicking the hidden representations of the teacher at every intermediate layer. However, layer-wise distillation is difficult: because the student has a smaller model capacity than the teacher, it is often under-fitted. Furthermore, the hidden representations of the teacher contain redundant information that the student does not necessarily need for learning the target task. To address these challenges, we propose a novel Task-aware layEr-wise Distillation (TED). TED designs task-aware filters to align the hidden representations of the student and the teacher at each layer. The filters select the knowledge that is useful for the target task from the hidden representations. As such, TED reduces the knowledge gap between the two models and helps the student fit better on the target task. We evaluate TED in two scenarios: continual pre-training and fine-tuning. TED demonstrates significant and consistent improvements over existing distillation methods in both scenarios. Code is available at https://github.com/cliang1453/task-aware-distillation.
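To make the core idea concrete, here is a minimal PyTorch-style sketch of layer-wise distillation with per-layer filters, written under simplifying assumptions: the class and function names (`TaskAwareFilter`, `layerwise_distillation_loss`), the plain linear filters, and the MSE alignment objective are illustrative choices, not the authors' implementation; the official code is at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskAwareFilter(nn.Module):
    """Hypothetical per-layer filter projecting hidden states into a
    task-relevant subspace. The real TED filters are trained with a
    task-specific objective; here we use a plain linear projection."""

    def __init__(self, hidden_dim: int, filter_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, filter_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden_states)


def layerwise_distillation_loss(student_hiddens, teacher_hiddens,
                                student_filters, teacher_filters):
    """Align filtered student and teacher representations layer by layer.

    student_hiddens / teacher_hiddens: lists of [batch, seq, hidden] tensors,
    one per matched layer (layer matching is assumed to be done already).
    student_filters / teacher_filters: lists of TaskAwareFilter modules.
    """
    loss = 0.0
    for h_s, h_t, f_s, f_t in zip(student_hiddens, teacher_hiddens,
                                  student_filters, teacher_filters):
        # Filter both sides before measuring the mismatch, so the student is
        # only asked to match task-relevant information, not everything the
        # teacher encodes. The teacher side is detached (no gradient).
        loss = loss + F.mse_loss(f_s(h_s), f_t(h_t.detach()))
    return loss / len(student_hiddens)
```

In practice this layer-wise term would be combined with the usual task loss (and typically a logit-distillation loss) when training the student.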

Author Information

Chen Liang (Georgia Institute of Technology)
Simiao Zuo (Georgia Institute of Technology)
Qingru Zhang (Georgia Institute of Technology)

Qingru Zhang is a Ph.D. student at Georgia Tech. His research mainly focuses on developing principled learning algorithms with an emphasis on language models and graph representation learning.

Pengcheng He (Microsoft)
Weizhu Chen (Microsoft)
Tuo Zhao (Georgia Tech)
