Several recent studies have elucidated why knowledge distillation (KD) improves model performance. However, few have investigated advantages of KD beyond its effect on performance. In this study, we show that KD enhances the interpretability of models as well as their accuracy. For a quantitative comparison of model interpretability, we measured the number of concept detectors identified by network dissection. We attribute the improvement in interpretability to the class-similarity information transferred from the teacher to the student model. First, we confirmed that class-similarity information is transferred from the teacher to the student via logit distillation. We then analyzed how the presence and the degree of class-similarity information affect model interpretability. We conducted extensive quantitative and qualitative experiments across different datasets, KD methods, and interpretability measures. Our results suggest that models distilled from large teacher models can be used more reliably in various fields. The code is available at https://github.com/Rok07/KD_XAI.git.
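As context for the logit distillation referred to above: the class-similarity information is carried by the teacher's temperature-softened output distribution, which the student is trained to match. The following is a minimal PyTorch-style sketch of a standard Hinton-style logit-distillation loss, not the paper's exact implementation; the temperature T and mixing weight alpha are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Standard logit distillation: weighted sum of cross-entropy on the
        ground-truth labels and KL divergence between the temperature-softened
        teacher and student distributions. The softened teacher probabilities
        encode inter-class similarity (e.g., 'cat' resembling 'dog' more than
        'truck'), which is the signal studied in the paper."""
        ce = F.cross_entropy(student_logits, labels)
        kl = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradient magnitude matches the unsoftened loss
        return alpha * kl + (1.0 - alpha) * ce

A higher temperature flattens the teacher distribution and exposes more of the class-similarity structure; alpha controls how strongly the student follows the teacher rather than the hard labels.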
Author Information
Hyeongrok Han (Seoul National University)
Siwon Kim (Seoul National University)
Hyun-Soo Choi (Seoul National University of Science and Technology)
Sungroh Yoon (Seoul National University)
More from the Same Authors
- 2023 : De-stereotyping Text-to-image Models through Prompt Tuning »
  Eunji Kim · Siwon Kim · Chaehun Shin · Sungroh Yoon
- 2023 Poster: Improving Visual Prompt Tuning for Self-supervised Vision Transformers »
  Seungryong Yoo · Eunji Kim · Dahuin Jung · Jungbeom Lee · Sungroh Yoon
- 2023 Poster: Probabilistic Concept Bottleneck Models »
  Eunji Kim · Dahuin Jung · Sangha Park · Siwon Kim · Sungroh Yoon
- 2022 Poster: AutoSNN: Towards Energy-Efficient Spiking Neural Networks »
  Byunggook Na · Jisoo Mok · Seongsik Park · Dongjin Lee · Hyeokjun Choe · Sungroh Yoon
- 2022 Poster: Dataset Condensation with Contrastive Signals »
  Saehyung Lee · Sanghyuk Chun · Sangwon Jung · Sangdoo Yun · Sungroh Yoon
- 2022 Spotlight: Dataset Condensation with Contrastive Signals »
  Saehyung Lee · Sanghyuk Chun · Sangwon Jung · Sangdoo Yun · Sungroh Yoon
- 2022 Spotlight: AutoSNN: Towards Energy-Efficient Spiking Neural Networks »
  Byunggook Na · Jisoo Mok · Seongsik Park · Dongjin Lee · Hyeokjun Choe · Sungroh Yoon
- 2022 Poster: Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance »
  Heeseung Kim · Sungwon Kim · Sungroh Yoon
- 2022 Spotlight: Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance »
  Heeseung Kim · Sungwon Kim · Sungroh Yoon
- 2022 Poster: Confidence Score for Source-Free Unsupervised Domain Adaptation »
  Jonghyun Lee · Dahuin Jung · Junho Yim · Sungroh Yoon
- 2022 Spotlight: Confidence Score for Source-Free Unsupervised Domain Adaptation »
  Jonghyun Lee · Dahuin Jung · Junho Yim · Sungroh Yoon
- 2019 Poster: FloWaveNet: A Generative Flow for Raw Audio »
  Sungwon Kim · Sang-gil Lee · Jongyoon Song · Jaehyeon Kim · Sungroh Yoon
- 2019 Oral: FloWaveNet: A Generative Flow for Raw Audio »
  Sungwon Kim · Sang-gil Lee · Jongyoon Song · Jaehyeon Kim · Sungroh Yoon
- 2019 Poster: HexaGAN: Generative Adversarial Nets for Real World Classification »
  Uiwon Hwang · Dahuin Jung · Sungroh Yoon
- 2019 Oral: HexaGAN: Generative Adversarial Nets for Real World Classification »
  Uiwon Hwang · Dahuin Jung · Sungroh Yoon