Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi

Thu Jun 13th 10:10 -- 10:15 AM @ Grand Ballroom

Current methods that interpret deep learning models by generating saliency maps generally rely on two key assumptions. First, they use first-order approximations of the loss function, neglecting higher-order terms such as the loss curvature. Second, they evaluate each feature's importance in isolation, ignoring feature inter-dependencies. In this work, we study the effect of relaxing these two assumptions. First, by characterizing a closed-form formula for the Hessian matrix of a deep ReLU network, we prove that, for a classification problem with a large number of classes, if an input has a high-confidence classification score, including the Hessian term has a small impact on the final solution. We prove this result by showing that in this case the Hessian matrix is approximately of rank one and its leading eigenvector is almost parallel to the gradient of the loss function. Our empirical experiments on ImageNet samples are consistent with our theory. This result may also have implications for related problems such as adversarial examples. Second, we compute the importance of group features in deep learning interpretation by introducing a sparsity regularization term. We use an $L_0$-$L_1$ relaxation technique along with proximal gradient descent to compute group-feature importance scores efficiently. Our empirical results indicate that considering group features can significantly improve deep learning interpretation.
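As an illustrative sanity check (a sketch, not the paper's code or its experimental setup), the rank-one claim can be probed numerically on a toy one-layer ReLU network. Because a ReLU network is piecewise linear in its input, the logits are locally $z = Jx + b$ with a constant Jacobian $J$, and all curvature of the cross-entropy loss comes from the softmax: the gradient is $J^\top(p - e_y)$ and the Hessian is $J^\top(\mathrm{diag}(p) - pp^\top)J$. The network sizes, random weights, and the logit scaling/bias used to force a high-confidence prediction are all assumptions made for the demonstration.

```python
import numpy as np

# Toy check: for a high-confidence prediction with many classes, the input
# Hessian of the cross-entropy loss should be near rank one, with its top
# eigenvector almost parallel to the loss gradient.
rng = np.random.default_rng(0)
d, h, C = 20, 32, 100                      # input dim, hidden units, classes

W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(C, h))
x = rng.normal(size=d)

# Locally, the ReLU net is linear: logits z = J @ x (+ bias), with J constant.
mask = (W1 @ x > 0).astype(float)
J = 4.0 * (W2 @ (mask[:, None] * W1))      # scaling spreads the logits out
z = J @ x
y = int(np.argmax(z))
z[y] += 20.0                               # output bias forcing high confidence
p = np.exp(z - z.max())
p /= p.sum()

# For L = -log p_y: gradient = J^T (p - e_y), Hessian = J^T (diag(p) - p p^T) J
e_y = np.zeros(C)
e_y[y] = 1.0
g = J.T @ (p - e_y)
H = J.T @ (np.diag(p) - np.outer(p, p)) @ J

eigvals, eigvecs = np.linalg.eigh(H)
v = eigvecs[:, -1]                         # leading eigenvector of the Hessian
cos = abs(v @ g) / (np.linalg.norm(v) * np.linalg.norm(g))
print(f"confidence p_y = {p[y]:.4f}")
print(f"|cos(top eigenvector, gradient)| = {cos:.4f}")
print(f"eigenvalue ratio = {eigvals[-1] / max(abs(eigvals[-2]), 1e-18):.1f}")
```

At high confidence, the cosine similarity comes out close to 1 and the top eigenvalue dominates the second, consistent with the abstract's rank-one characterization.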

Author Information

Sahil Singla (University of Maryland)
Eric Wallace (Allen Institute for Artificial Intelligence)
Shi Feng (University of Maryland)
Soheil Feizi (University of Maryland)
