

Poster

Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation

Giung Nam · Hyungi Lee · Byeongho Heo · Juho Lee

Hall E #319

Keywords: [ DL: Algorithms ]


Abstract:

Ensembles of deep neural networks have demonstrated superior performance, but their heavy computational cost hinders their application in resource-limited environments. This motivates distilling knowledge from the ensemble teacher into a smaller student network, and there are two important design choices for this ensemble distillation: 1) how to construct the student network, and 2) what data should be shown during training. In this paper, we propose a weight averaging technique in which a student with multiple subnetworks is trained to absorb the functional diversity of the ensemble teachers, and those subnetworks are then properly averaged for inference, yielding a single student network with no additional inference cost. We also propose a perturbation strategy that seeks inputs from which the diversities of the teachers can be better transferred to the student. Combining these two, our method significantly improves upon previous methods on various image classification tasks.
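
As a rough illustration of the two components described above, the sketch below (PyTorch-style, not the authors' released code) averages the weights of several student subnetworks into a single network and perturbs inputs toward higher teacher disagreement. Function names such as `average_subnetworks` and `diversity_seeking_perturbation`, the variance-based disagreement measure, and the step size `epsilon` are illustrative assumptions rather than the paper's actual interface.

```python
import copy
import torch


def average_subnetworks(subnetworks):
    """Average the parameters of several student subnetworks into one model.

    Sketch of the weight-averaging idea: after distillation, the subnetworks
    are collapsed into a single student with no additional inference cost.
    (Buffers such as batch-norm statistics are not handled here.)
    """
    averaged = copy.deepcopy(subnetworks[0])
    with torch.no_grad():
        for name, param in averaged.named_parameters():
            stacked = torch.stack(
                [dict(net.named_parameters())[name].data for net in subnetworks]
            )
            param.data.copy_(stacked.mean(dim=0))
    return averaged


def diversity_seeking_perturbation(x, teachers, epsilon=1e-2):
    """Perturb inputs toward regions where the ensemble teachers disagree.

    Disagreement is approximated here by the variance of teacher logits, and a
    single signed-gradient ascent step on that variance serves as a simple
    stand-in for the perturbation strategy described in the abstract.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = torch.stack([t(x) for t in teachers])   # (n_teachers, batch, classes)
    disagreement = logits.var(dim=0).mean()          # scalar diversity proxy
    disagreement.backward()
    with torch.no_grad():
        x_perturbed = x + epsilon * x.grad.sign()
    return x_perturbed.detach()
```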
