The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its "reference activation" and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http://goo.gl/qKb7pL, code: http://goo.gl/RM8jvH.
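To make the abstract's description concrete, below is a minimal NumPy sketch of the Rescale rule for a one-hidden-layer ReLU network. It is not the authors' released implementation (linked above); the function and variable names (deeplift_rescale_contribs, W1, b1, W2, b2, x, x_ref) are hypothetical and the network is assumed to end in a single scalar output.

import numpy as np

def deeplift_rescale_contribs(x, x_ref, W1, b1, W2, b2):
    """Contribution of each input feature to the scalar output,
    measured against the reference input x_ref (Rescale rule sketch)."""
    # Forward pass on the actual input and on the reference input.
    pre, pre_ref = W1 @ x + b1, W1 @ x_ref + b1      # hidden pre-activations
    h, h_ref = np.maximum(pre, 0.0), np.maximum(pre_ref, 0.0)

    # Rescale rule for the ReLU: multiplier = delta-output / delta-input,
    # falling back to the (sub)gradient when delta-input is near zero.
    d_pre, d_h = pre - pre_ref, h - h_ref
    nonzero = np.abs(d_pre) > 1e-7
    relu_mult = np.where(nonzero,
                         d_h / np.where(nonzero, d_pre, 1.0),
                         (pre_ref > 0).astype(float))

    # Linear layers use their weights as multipliers; chaining multipliers
    # backward is the "single backward pass" mentioned in the abstract.
    m_hidden = W2.flatten() * relu_mult   # multipliers at hidden pre-activations
    m_input = W1.T @ m_hidden             # multipliers at the input features

    # Contribution score = multiplier * (input minus reference).
    return m_input * (x - x_ref)

# Usage: contributions should sum to the output's difference from its reference value.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
x, x_ref = rng.normal(size=4), np.zeros(4)

contribs = deeplift_rescale_contribs(x, x_ref, W1, b1, W2, b2)
delta_out = (W2 @ np.maximum(W1 @ x + b1, 0) - W2 @ np.maximum(W1 @ x_ref + b1, 0)).item()
print(contribs, contribs.sum(), delta_out)   # the last two values should match

The usage lines illustrate the summation-to-delta sanity check: the contribution scores add up to the difference between the output on the actual input and the output on the reference input.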
Author Information
Avanti Shrikumar (Stanford University)
Peyton Greenside (Stanford University)
Anshul Kundaje (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Learning Important Features Through Propagating Activation Differences
  Mon. Aug 7th 01:24 -- 01:42 AM, Darling Harbour Theatre
More from the Same Authors
- 2020 Poster: Maximum Likelihood with Bias-Corrected Calibration is Hard-To-Beat at Label Shift Adaptation
  Amr Mohamed Alexandari · Anshul Kundaje · Avanti Shrikumar