Poster
Bias Also Matters: Bias Attribution for Deep Neural Network Explanation
Shengjie Wang · Tianyi Zhou · Jeff Bilmes
Pacific Ballroom #147
Keywords: [ Interpretability ]
The gradient of a deep neural network (DNN) w.r.t. the input provides information that can be used to explain the output prediction in terms of the input features, and it has been widely studied as an aid to interpreting DNNs. In a linear model (i.e., g(x) = wx + b), the gradient corresponds to the weights w. Such a model can serve as a reasonable local linear approximation of a smooth nonlinear DNN, and the weights of this local model are then the gradient. The bias b, however, is usually overlooked in attribution methods. In this paper, we observe that since the bias in a DNN also makes a non-negligible contribution to the correctness of predictions, it can likewise play a significant role in understanding DNN behavior. We propose a backpropagation-type algorithm, “bias back-propagation (BBp)”, that starts at the output layer and iteratively attributes the bias of each layer to its input nodes while combining it with the resulting bias term of the previous layer. Together with the backpropagation of the gradient that generates w, we can fully recover the locally linear model g(x) = wx + b. In experiments, we show that BBp can generate complementary and highly interpretable explanations.
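To make the locally linear model concrete, the NumPy sketch below recovers w and b exactly for a toy ReLU network at a given input: the activation pattern fixes a 0/1 diagonal matrix per layer, so composing the layers yields the local w (the input gradient) and an accumulated bias b. This is a minimal illustration under assumed layer sizes and weights, not the authors' implementation; in particular, the proportional attribution rule at the end is an illustrative assumption, not the BBp rule from the paper.

```python
import numpy as np

# Toy 3-layer ReLU network with random (hypothetical) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
W3, b3 = rng.normal(size=(2, 3)), rng.normal(size=2)

def forward(x):
    """Forward pass; also return hidden pre-activations (for the activation pattern)."""
    z1 = W1 @ x + b1
    z2 = W2 @ np.maximum(z1, 0.0) + b2
    y = W3 @ np.maximum(z2, 0.0) + b3
    return y, (z1, z2)

def local_linear_model(x):
    """Recover w and b of the exact local model g(x) = w x + b at input x."""
    _, (z1, z2) = forward(x)
    D1 = np.diag((z1 > 0).astype(float))   # ReLU activation pattern, layer 1
    D2 = np.diag((z2 > 0).astype(float))   # ReLU activation pattern, layer 2
    w = W3 @ D2 @ W2 @ D1 @ W1             # local weights = input gradient
    b = W3 @ D2 @ W2 @ D1 @ b1 + W3 @ D2 @ b2 + b3   # accumulated bias term
    return w, b

x = rng.normal(size=4)
y, _ = forward(x)
w, b = local_linear_model(x)
assert np.allclose(y, w @ x + b)  # w x + b reproduces the DNN output exactly

# One simple, assumed attribution rule (for illustration only): split each
# output's bias across input features in proportion to |w_ij * x_j|, so the
# bias attribution lives on the same per-feature scale as the gradient term.
shares = np.abs(w * x)
shares /= shares.sum(axis=1, keepdims=True) + 1e-12
bias_attribution = shares * b[:, None]    # per-feature share of b, per output
print(bias_attribution)
```

The assertion makes the key point of the abstract explicit: for piecewise-linear networks the local model g(x) = wx + b is exact, so any explanation that drops b discards part of the output; how b is distributed over input nodes is precisely what BBp addresses.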