This paper presents a method to explain how the information of each input variable is gradually discarded during the forward propagation of a deep neural network (DNN), which provides a new perspective for explaining DNNs. We define two entropy-based metrics, (1) the discarding of pixel-wise information during the forward propagation and (2) the uncertainty of the input reconstruction, to measure the input information contained by a specific layer from two perspectives. Unlike previous attribution metrics, the proposed metrics ensure fair comparisons between different layers of different DNNs. We use these metrics to analyze the efficiency of information processing in DNNs, which exhibits strong connections to the performance of DNNs. We analyze information discarding in a pixel-wise manner, unlike the information bottleneck theory, which measures feature information w.r.t. the sample distribution. Experiments have demonstrated the effectiveness of our metrics in analyzing classic DNNs and explaining existing deep-learning techniques. The code is available at https://github.com/haotianSustc/deepinfo.
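Below is a minimal sketch of how the first metric (pixel-wise information discarding) might be estimated, assuming a PyTorch module `feature_net` that maps an input image to the feature of the layer under study. The formulation (learning per-pixel Gaussian perturbation scales that maximize entropy while keeping the layer's feature approximately unchanged), the function name, and all hyper-parameters (`n_steps`, `lr`, `tau`) are illustrative assumptions, not the authors' released implementation; see the repository linked above for the official code.

```python
import torch
import torch.nn.functional as F

def pixelwise_information_discarding(feature_net, x, n_steps=200, lr=0.01, tau=0.1):
    """Hypothetical sketch: estimate per-pixel information discarding of a layer.

    We learn per-pixel Gaussian noise scales sigma that maximize the entropy of the
    perturbed input while keeping the layer's feature f(x + eps) close to f(x).
    A larger sigma at a pixel means the layer tolerates more perturbation there,
    i.e. it has discarded more of that pixel's information.  The entropy of an
    axis-aligned Gaussian is sum_i log sigma_i up to constants, which we use as
    the entropy objective.
    """
    x = x.detach()
    with torch.no_grad():
        f_x = feature_net(x)                                # reference feature of the clean input
    log_sigma = torch.zeros_like(x, requires_grad=True)     # one log-scale per pixel
    optimizer = torch.optim.Adam([log_sigma], lr=lr)

    for _ in range(n_steps):
        eps = torch.randn_like(x) * log_sigma.exp()         # reparameterized Gaussian perturbation
        f_pert = feature_net(x + eps)
        feat_loss = F.mse_loss(f_pert, f_x)                 # keep the layer's feature nearly unchanged
        entropy = log_sigma.sum()                           # Gaussian entropy up to additive constants
        loss = feat_loss - tau * entropy                    # trade off feature stability vs. entropy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return log_sigma.exp().detach()   # per-pixel scale map: larger => more information discarded
```

Under these assumptions, the returned map can be compared across layers of the same network or across networks, since the entropy is measured w.r.t. the same input pixels rather than layer-specific feature magnitudes.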
Author Information
Haotian Ma (Southern University of Science and Technology)
Hao Zhang (Shanghai Jiao Tong University)
Fan Zhou (Shanghai Jiao Tong University)
Yinqing Zhang (Shanghai Jiao Tong University)
Quanshi Zhang (Shanghai Jiao Tong University)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Poster: Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding »
Tue. Jul 19 through Wed. Jul 20, Room Hall E #909
More from the Same Authors
-
2023 Poster: Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts »
Qihan Ren · Huiqi Deng · Yunuo Chen · Siyu Lou · Quanshi Zhang
-
2023 Poster: HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation »
Lu Chen · Siyu Lou · Keyan Zhang · JIN HUANG · Quanshi Zhang
-
2023 Poster: Defects of Convolutional Decoder Networks in Frequency Representation »
Ling Tang · Wen Shen · Zhanpeng Zhou · YueFeng Chen · Quanshi Zhang
-
2023 Poster: Does a Neural Network Really Encode Symbolic Concepts? »
Mingjie Li · Quanshi Zhang
-
2022 Poster: Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs »
Jie Ren · Mingjie Li · Meng Zhou · Shih-Han Chan · Quanshi Zhang
-
2022 Spotlight: Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs »
Jie Ren · Mingjie Li · Meng Zhou · Shih-Han Chan · Quanshi Zhang
-
2021 Workshop: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI »
Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu
-
2021 : [12:00 - 12:02 PM UTC] Welcome »
Quanshi Zhang
-
2021 Poster: Interpreting and Disentangling Feature Components of Various Complexity from DNNs »
Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang
-
2021 Spotlight: Interpreting and Disentangling Feature Components of Various Complexity from DNNs »
Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang
-
2020 Expo Talk Panel: Baidu AutoDL: Automated and Interpretable Deep Learning »
Bolei Zhou · Yi Yang · Quanshi Zhang · Dejing Dou · Haoyi Xiong · Jiahui Yu · Humphrey Shi · Linchao Zhu · Xingjian Li
-
2019 Poster: Towards a Deep and Unified Understanding of Deep Neural Models in NLP »
Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie
-
2019 Oral: Towards a Deep and Unified Understanding of Deep Neural Models in NLP »
Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie