Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs); however, a theory justifying their behaviors is missing: guided backpropagation (GBP) and the deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than the saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially performing (partial) image recovery, which is unrelated to the network's decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, together with the local connections in CNNs, are the two main causes of their compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
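The three methods contrasted in the abstract differ only in how the gradient is passed backward through each ReLU. A minimal NumPy sketch of the three rules (function and argument names are ours, not from the paper): the saliency map gates the incoming gradient by the sign of the forward pre-activation, DeconvNet's "backward ReLU" gates it by the sign of the gradient itself, and GBP applies both gates.

```python
import numpy as np

def relu_backward(grad_out, pre_act, method="saliency"):
    """Backward pass through one ReLU under the three visualization rules.

    grad_out: gradient arriving from the layer above.
    pre_act:  pre-activation values saved from the forward pass.
    """
    if method == "saliency":
        # Plain backprop: pass gradient only where the forward ReLU was active.
        return grad_out * (pre_act > 0)
    if method == "deconvnet":
        # Backward ReLU: pass only positive gradients, ignoring the forward pass.
        return grad_out * (grad_out > 0)
    if method == "guided":
        # GBP: pass only positive gradients at positions that were forward-active.
        return grad_out * (pre_act > 0) * (grad_out > 0)
    raise ValueError(f"unknown method: {method}")
```

For example, with pre-activations `[1, -1, 2, -2]` and incoming gradients `[0.5, 0.5, -0.5, -0.5]`, the saliency rule keeps positions 0 and 2, DeconvNet keeps positions 0 and 1, and GBP keeps only position 0, illustrating why GBP discards the most signal about the forward computation.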
Author Information
Weili Nie (Rice University)
Yang Zhang (Rice University)
Ankit Patel (Rice University, Baylor College of Medicine)
Related Events (a corresponding poster, oral, or spotlight)
-
2018 Oral: A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations
Fri. Jul 13th, 03:40 -- 03:50 PM, Room K1
More from the Same Authors
-
2020 Poster: Semi-Supervised StyleGAN for Disentanglement Learning
Weili Nie · Tero Karras · Animesh Garg · Shoubhik Debnath · Anjul Patney · Ankit Patel · Anima Anandkumar