This paper theoretically analyzes the complexity of feature transformations encoded in piecewise linear DNNs with ReLU layers. We propose information-theoretic metrics to measure three types of transformation complexity, and we further discover and prove a strong correlation between complexity and the disentanglement of transformations. Based on the proposed metrics, we analyze two typical phenomena in how transformation complexity changes during training, and explore the ceiling of a DNN's complexity. The proposed metrics can also be used as a loss to learn a DNN with minimum complexity, which controls the DNN's over-fitting level and influences its adversarial robustness, adversarial transferability, and knowledge consistency. Comprehensive comparative studies provide new perspectives for understanding DNNs. The code is released at https://github.com/sjtu-XAI-lab/transformation-complexity.
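To make the idea concrete, here is a minimal sketch (not the paper's exact metric) of one common information-theoretic proxy for the complexity of a ReLU transformation: the entropy of the layer's binary gating states, estimated from activation frequencies over a batch of inputs. All names and the toy layer below are illustrative assumptions, not code from the released repository.

```python
# Illustrative sketch, NOT the paper's exact metric: estimate the mean
# per-unit entropy of ReLU gating states as a rough complexity proxy.
import numpy as np

rng = np.random.default_rng(0)

def relu_gates(x, W, b):
    """Binary gating states sigma = 1[Wx + b > 0] of a ReLU layer."""
    return (x @ W.T + b > 0).astype(np.uint8)

def gate_entropy(gates):
    """Mean per-unit binary entropy H(sigma_i) over the batch, in bits."""
    p = gates.mean(axis=0)            # activation frequency of each unit
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log(0)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())

# Toy data: 1000 random inputs through a random 16-unit ReLU layer.
X = rng.standard_normal((1000, 8))
W = rng.standard_normal((16, 8))
b = rng.standard_normal(16)
H = gate_entropy(relu_gates(X, W, b))
print(f"mean gating entropy: {H:.3f} bits")  # always between 0 and 1 bit per unit
```

A layer whose units are almost always on (or off) has near-zero gating entropy and behaves almost linearly, while entropy near 1 bit per unit indicates many distinct linear regions; this is the intuition behind entropy-based complexity measures, though the paper's own definitions should be consulted for the precise formulation.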
Author Information
Jie Ren (Shanghai Jiao Tong University)
Mingjie Li (Shanghai Jiao Tong University)
Meng Zhou (Carnegie Mellon University)
Shih-Han Chan (University of California San Diego)
Quanshi Zhang (Shanghai Jiao Tong University)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Spotlight: Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs »
Wed. Jul 20th 02:40 -- 02:45 PM Room 318 - 320
More from the Same Authors
-
2021 : Poster Session Test »
Jie Ren -
2023 Poster: Bayesian Neural Networks Avoid Encoding Sensitive and Complex Concepts »
Qihan Ren · Huiqi Deng · Yunuo Chen · Siyu Lou · Quanshi Zhang -
2023 Poster: HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation »
Lu Chen · Siyu Lou · Keyan Zhang · JIN HUANG · Quanshi Zhang -
2023 Poster: Problems with Convolution Operations in Frequency Representation »
Ling Tang · Wen Shen · Zhanpeng Zhou · YueFeng Chen · Quanshi Zhang -
2023 Poster: Is There an Emergence of Transferable Concepts in DNNs? »
Mingjie Li · Quanshi Zhang -
2022 Poster: Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding »
Haotian Ma · Hao Zhang · Fan Zhou · Yinqing Zhang · Quanshi Zhang -
2022 Spotlight: Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding »
Haotian Ma · Hao Zhang · Fan Zhou · Yinqing Zhang · Quanshi Zhang -
2021 Workshop: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI »
Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu -
2021 : [12:00 - 12:02 PM UTC] Welcome »
Quanshi Zhang -
2021 Poster: Interpreting and Disentangling Feature Components of Various Complexity from DNNs »
Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang -
2021 Spotlight: Interpreting and Disentangling Feature Components of Various Complexity from DNNs »
Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang -
2020 Expo Talk Panel: Baidu AutoDL: Automated and Interpretable Deep Learning »
Bolei Zhou · Yi Yang · Quanshi Zhang · Dejing Dou · Haoyi Xiong · Jiahui Yu · Humphrey Shi · Linchao Zhu · Xingjian Li -
2019 Poster: Towards a Deep and Unified Understanding of Deep Neural Models in NLP »
Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie -
2019 Oral: Towards a Deep and Unified Understanding of Deep Neural Models in NLP »
Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie