Knowledge distillation, i.e., one classifier being trained on the outputs of another classifier, is an empirically very successful technique for knowledge transfer between classifiers. It has even been observed that classifiers learn much faster and more reliably if trained with the outputs of another classifier as soft labels, instead of from ground truth data. So far, however, there is no satisfactory theoretical explanation of this phenomenon. In this work, we provide the first insights into the working mechanisms of distillation by studying the special case of linear and deep linear classifiers. Specifically, we prove a generalization bound that establishes fast convergence of the expected risk of a distillation-trained linear classifier. From the bound and its proof we extract three key factors that determine the success of distillation:
- Data geometry: geometric properties of the data distribution, in particular class separation, have a direct influence on the convergence speed of the risk.
- Optimization bias: gradient descent optimization finds a very favorable minimum of the distillation objective.
- Strong monotonicity: the expected risk of the student classifier always decreases as the size of the training set grows.
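To make the linear setting studied in the paper concrete, the following is a minimal illustrative sketch (not the authors' code) of distillation: a fixed linear "teacher" produces soft labels, and a linear "student" is trained on them by gradient descent on the cross-entropy objective. All names, data, and hyperparameters here are assumptions chosen for illustration.

```python
# Minimal sketch of linear-to-linear distillation (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data and a fixed linear teacher (dimensions chosen arbitrarily).
d, n = 20, 100
w_teacher = rng.normal(size=d)
X = rng.normal(size=(n, d))
p_teacher = sigmoid(X @ w_teacher)          # teacher's soft labels in (0, 1)

# Linear student trained on the teacher's soft labels with gradient descent.
w_student = np.zeros(d)
lr = 0.5
for _ in range(2000):
    p_student = sigmoid(X @ w_student)
    grad = X.T @ (p_student - p_teacher) / n   # gradient of the cross-entropy loss
    w_student -= lr * grad

# Measure how well the student reproduces the teacher's decision boundary on fresh data.
X_test = rng.normal(size=(5000, d))
agreement = np.mean(np.sign(X_test @ w_student) == np.sign(X_test @ w_teacher))
print(f"student/teacher agreement on held-out data: {agreement:.3f}")
```

Training on soft labels (the teacher's probabilities) rather than hard 0/1 labels is the distinguishing feature of distillation in this sketch; the paper's analysis concerns how quickly the student's expected risk converges in exactly this kind of setup.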
Author Information
Mary Phuong (IST Austria)
Christoph H. Lampert (IST Austria)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Towards Understanding Knowledge Distillation
  Wed Jun 12th, 09:35 -- 09:40 PM, Room 201
More from the Same Authors
- 2020 Poster: On the Sample Complexity of Adversarial Multi-Source PAC Learning
  Nikola Konstantinov · Elias Frantar · Dan Alistarh · Christoph H. Lampert
- 2019 Poster: Robust Learning from Untrusted Sources
  Nikola Konstantinov · Christoph H. Lampert
- 2019 Oral: Robust Learning from Untrusted Sources
  Nikola Konstantinov · Christoph H. Lampert
- 2018 Poster: Learning equations for extrapolation and control
  Subham S Sahoo · Christoph H. Lampert · Georg Martius
- 2018 Oral: Learning equations for extrapolation and control
  Subham S Sahoo · Christoph H. Lampert · Georg Martius
- 2018 Poster: Data-Dependent Stability of Stochastic Gradient Descent
  Ilja Kuzborskij · Christoph H. Lampert
- 2018 Oral: Data-Dependent Stability of Stochastic Gradient Descent
  Ilja Kuzborskij · Christoph H. Lampert
- 2017 Poster: PixelCNN Models with Auxiliary Variables for Natural Image Modeling
  Alexander Kolesnikov · Christoph H. Lampert
- 2017 Poster: Multi-task Learning with Labeled and Unlabeled Tasks
  Anastasia Pentina · Christoph H. Lampert
- 2017 Talk: Multi-task Learning with Labeled and Unlabeled Tasks
  Anastasia Pentina · Christoph H. Lampert
- 2017 Talk: PixelCNN Models with Auxiliary Variables for Natural Image Modeling
  Alexander Kolesnikov · Christoph H. Lampert