

Poster

A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations

Weili Nie · Yang Zhang · Ankit Patel

Hall B #19

Abstract:

Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs); however, a theory justifying their behaviors has been missing: Guided backpropagation (GBP) and the deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than the saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery, which is unrelated to the network decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, together with the local connections in CNNs, are the two main causes of the compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
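To make the role of the backward ReLU concrete, here is a minimal NumPy sketch (not from the paper; the function name and arguments are illustrative) of how the three methods differ when backpropagating a gradient through a single ReLU: the saliency map gates by the forward activation pattern, DeconvNet gates by the sign of the incoming gradient, and GBP applies both gates.

```python
import numpy as np

def relu_backward(grad_out, pre_activation, method="saliency"):
    """Backward pass through one ReLU under the three visualization rules.

    grad_out:        gradient flowing in from the layer above
    pre_activation:  the ReLU's forward-pass input at this layer
    """
    forward_mask = pre_activation > 0   # units active in the forward pass
    positive_grad = grad_out > 0        # positive incoming gradients

    if method == "saliency":
        # Vanilla backprop: gate by the forward ReLU pattern only.
        return grad_out * forward_mask
    elif method == "deconvnet":
        # DeconvNet: the "backward ReLU" -- gate by the sign of the
        # incoming gradient, ignoring the forward activation pattern.
        return grad_out * positive_grad
    elif method == "gbp":
        # Guided backprop: apply both gates (forward mask AND backward ReLU).
        return grad_out * forward_mask * positive_grad
    raise ValueError(f"unknown method: {method}")

# Example: one gradient step through a ReLU with mixed signs.
x = np.array([-1.0, 2.0, 3.0])
g = np.array([0.5, -0.5, 1.0])
print(relu_backward(g, x, "saliency"))   # [ 0.  -0.5  1. ]
print(relu_backward(g, x, "deconvnet"))  # [ 0.5  0.   1. ]
print(relu_backward(g, x, "gbp"))        # [ 0.   0.   1. ]
```

Note how DeconvNet's output depends only on the gradient signal, not on which units actually fired on the input; this is the sense in which, per the paper's analysis, the backward ReLU decouples the visualization from the network's decision.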
