Current saliency map interpretations for neural networks generally rely on two key assumptions. First, they use first-order approximations of the loss function, neglecting higher-order terms such as the loss curvature. Second, they evaluate each feature's importance in isolation, ignoring feature interdependencies. This work studies the effect of relaxing these two assumptions. First, we derive a closed-form formula for the input Hessian matrix of a deep ReLU network. Using this formula, we show that, for classification problems with many classes, if a prediction has high probability then including the Hessian term has only a small impact on the interpretation. We prove this result by demonstrating that, under these conditions, the Hessian matrix is approximately rank one and its leading eigenvector is almost parallel to the gradient of the loss. We empirically validate this theory by interpreting ImageNet classifiers. Second, we incorporate feature interdependencies by computing the importance of group-features using a sparsity regularization term. We use an L0-L1 relaxation technique along with proximal gradient descent to efficiently compute group-feature importance values. Our empirical results show that our method significantly improves deep learning interpretations.
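Since a ReLU network is piecewise linear, its logits are locally linear in the input, so the input Hessian of the softmax cross-entropy loss factors as H = J^T (diag(p) - p p^T) J, where J is the input Jacobian of the logits and p the predicted probabilities. The sketch below is a toy NumPy construction (not the paper's code; the layer sizes and the bias used to force a confident prediction are illustrative choices) that checks the rank-one claim numerically: with many classes and high predicted probability, the top eigenvector of H nearly aligns with the loss gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, K = 30, 64, 200              # input dim, hidden width, number of classes

W1 = rng.normal(size=(h, d)) / np.sqrt(d)
W2 = rng.normal(size=(K, h)) / np.sqrt(h)
b = np.zeros(K); b[0] = 15.0       # bias boost -> a high-confidence prediction
x = rng.normal(size=d)

# Forward pass; a ReLU net is linear within the activation region of x.
a = W1 @ x
D = np.diag((a > 0).astype(float))           # ReLU activation pattern at x
z = W2 @ np.maximum(a, 0.0) + b              # logits
p = np.exp(z - z.max()); p /= p.sum()        # softmax probabilities
y = int(p.argmax())                          # use the predicted class as label

# Closed-form quantities in this linear region.
J = W2 @ D @ W1                              # input Jacobian of logits, (K, d)
g = J.T @ (p - np.eye(K)[y])                 # input gradient of cross-entropy
H = J.T @ (np.diag(p) - np.outer(p, p)) @ J  # closed-form input Hessian

eigvals, eigvecs = np.linalg.eigh(H)         # eigenvalues in ascending order
cos = abs(eigvecs[:, -1] @ g) / np.linalg.norm(g)
print(f"p_max = {p[y]:.4f}")
print(f"|cos(top eigenvector, gradient)| = {cos:.4f}")
print(f"top-two eigenvalue ratio = {eigvals[-1] / eigvals[-2]:.1f}")
```

The eigenvalue ratio grows with the number of classes and the prediction confidence, which is why the first-order (gradient) saliency map changes little when the Hessian term is added in this regime.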
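The group-feature step relies on proximal gradient descent with a sparsity penalty. As a generic illustration only (not the paper's exact objective or its L0-L1 relaxation: the quadratic loss, matrix A, and penalty weight below are stand-ins), the following sketch runs proximal gradient descent (ISTA) on an L1-regularized least-squares problem; the proximal operator of the L1 term is elementwise soft-thresholding, the basic building block such relaxations reduce to.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=2000):
    """Proximal gradient descent for 0.5 * ||A m - b||^2 + lam * ||m||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    m = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ m - b)               # gradient of the smooth part
        m = soft_threshold(m - grad / L, lam / L)  # gradient step, then prox
    return m

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 100))
m_true = np.zeros(100); m_true[:5] = 3.0       # only 5 "important" features
b = A @ m_true                                 # noiseless measurements
m_hat = ista(A, b, lam=0.5)
print("nonzero entries:", int((np.abs(m_hat) > 0.1).sum()))
```

The soft-thresholding prox zeroes out small coordinates at every iteration, so the recovered importance vector is sparse: here the optimization concentrates the importance mass on the five planted features.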
Author Information
Sahil Singla (University of Maryland)
Eric Wallace (Allen Institute for Artificial Intelligence)
Shi Feng (University of Maryland)
Soheil Feizi (University of Maryland)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
  Thu. Jun 13th, 05:10 -- 05:15 PM, Grand Ballroom
More from the Same Authors
- 2022 : Towards Better Understanding of Self-Supervised Representations
  Neha Mukund Kalibhat · Kanika Narang · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2022 : Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 : Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication
  Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
- 2023 Poster: Run-off Election: Improved Provable Defense against Data Poisoning Attacks
  Keivan Rezaei · Kiarash Banihashem · Atoosa Malemir Chegini · Soheil Feizi
- 2023 Poster: Identifying Interpretable Subspaces in Image Representations
  Neha Mukund Kalibhat · Shweta Bhardwaj · C. Bayan Bruss · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2023 Poster: Text-To-Concept (and Back) via Cross-Model Alignment
  Mazda Moayeri · Keivan Rezaei · Maziar Sanjabi · Soheil Feizi
- 2022 : Panel discussion
  Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi
- 2022 : Toward Efficient Robust Training against Union of Lp Threat Models
  Gaurang Sriramanan · Maharshi Gor · Soheil Feizi
- 2022 Poster: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Poster: FOCUS: Familiar Objects in Common and Uncommon Settings
  Priyatham Kattakinda · Soheil Feizi
- 2022 Spotlight: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Spotlight: FOCUS: Familiar Objects in Common and Uncommon Settings
  Priyatham Kattakinda · Soheil Feizi
- 2021 : Invited Talk 6: Towards Understanding Foundations of Robust Learning
  Soheil Feizi
- 2021 Poster: Calibrate Before Use: Improving Few-shot Performance of Language Models
  Tony Z. Zhao · Eric Wallace · Shi Feng · Dan Klein · Sameer Singh
- 2021 Oral: Calibrate Before Use: Improving Few-shot Performance of Language Models
  Tony Z. Zhao · Eric Wallace · Shi Feng · Dan Klein · Sameer Singh
- 2021 Poster: Improved, Deterministic Smoothing for L_1 Certified Robustness
  Alexander Levine · Soheil Feizi
- 2021 Poster: Skew Orthogonal Convolutions
  Sahil Singla · Soheil Feizi
- 2021 Spotlight: Skew Orthogonal Convolutions
  Sahil Singla · Soheil Feizi
- 2021 Oral: Improved, Deterministic Smoothing for L_1 Certified Robustness
  Alexander Levine · Soheil Feizi
- 2020 Poster: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness
  Aounon Kumar · Alexander Levine · Tom Goldstein · Soheil Feizi
- 2020 Poster: Second-Order Provable Defenses against Adversarial Attacks
  Sahil Singla · Soheil Feizi
- 2020 Poster: On Second-Order Group Influence Functions for Black-Box Predictions
  Samyadeep Basu · Xuchen You · Soheil Feizi
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi