A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions
Daniel Lundstrom · Tianjian Huang · Meisam Razaviyayn

Tue Jul 19 10:45 AM -- 10:50 AM (PDT) @ Ballroom 3 & 4

As the efficacy of deep learning (DL) models grows, so do concerns about their poor explainability. Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction. Among various methods, Integrated Gradients (IG) sets itself apart by claiming that other methods fail to satisfy desirable axioms, while IG and methods like it uniquely satisfy them. This paper comments on fundamental aspects of IG and its applications/extensions: 1) We identify key differences between the function spaces used by IG and those of the supporting literature, differences that call the previous uniqueness claims into question. We show that with the introduction of an additional axiom, non-decreasing positivity, the uniqueness claims can be established. 2) We address the question of input sensitivity by identifying function classes where IG is/is not Lipschitz in the attributed input. 3) We show that axioms for single-baseline methods have analogous properties for methods with probability distribution baselines. 4) We introduce a computationally efficient method of identifying internal neurons that contribute to specified regions of an IG attribution map. Finally, we present experimental results validating this method.
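For reference, the single-baseline IG attribution discussed above is IG_i(x) = (x_i - x'_i) ∫₀¹ ∂F(x' + α(x − x'))/∂x_i dα. Below is a minimal sketch of the standard Riemann-sum approximation of this integral in JAX; the function name, step count, and toy model are illustrative assumptions, not code from the paper.

```python
import jax
import jax.numpy as jnp

def integrated_gradients(model_fn, x, baseline, steps=64):
    """Approximate IG_i(x) = (x_i - x'_i) * \int_0^1 dF/dx_i(x' + a(x - x')) da."""
    alphas = jnp.linspace(0.0, 1.0, steps)
    grad_fn = jax.grad(model_fn)
    # Gradients along the straight-line path from the baseline x' to the input x.
    path_grads = jax.vmap(lambda a: grad_fn(baseline + a * (x - baseline)))(alphas)
    avg_grads = path_grads.mean(axis=0)   # Riemann-sum estimate of the path integral
    return (x - baseline) * avg_grads     # completeness: attributions sum to F(x) - F(x')

# Usage example on a toy quadratic model with a zero baseline (hypothetical).
if __name__ == "__main__":
    f = lambda z: jnp.sum(z ** 2)
    x = jnp.array([1.0, 2.0, 3.0])
    attributions = integrated_gradients(f, x, jnp.zeros_like(x))
    print(attributions)  # approximately [1., 4., 9.], summing to f(x) - f(0) = 14
```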

Author Information

Daniel Lundstrom (University of Southern California)
Tianjian Huang (University of Southern California)
Meisam Razaviyayn (University of Southern California)
