Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge

Laura Rieger · Chandan Singh · William Murdoch · Bin Yu


Keywords: [ Accountability, Transparency and Interpretability ] [ Fairness, Equity and Justice ]


For an explanation of a deep learning model to be effective, it must both provide insight into the model and suggest a corresponding action for achieving some objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods to increase the predictive accuracy of a deep learning model. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by inserting domain knowledge into the model via explanations. We demonstrate the ability of CDEP to increase performance on an array of toy and real datasets.
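The idea of correcting a model by penalizing its explanations can be sketched as an augmented training objective: the usual prediction loss plus a term that penalizes importance assigned to features a practitioner has flagged as spurious. The sketch below is a minimal illustration, not the paper's method: it uses logistic regression with a weight-times-input attribution as a hypothetical stand-in for contextual decomposition, and the names (`explanation_penalized_loss`, `spurious_mask`, `lam`) are illustrative.

```python
import numpy as np

def explanation_penalized_loss(w, X, y, spurious_mask, lam=1.0):
    """Toy explanation-penalty objective in the spirit of CDEP.

    Assumed setup (not from the paper): binary logistic regression,
    with feature attribution approximated as weight * input rather
    than contextual decomposition. Features flagged in `spurious_mask`
    are penalized toward zero importance.
    """
    logits = X @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    # Standard prediction loss: binary cross-entropy.
    eps = 1e-12
    pred_loss = -np.mean(
        y * np.log(probs + eps) + (1 - y) * np.log(1 - probs + eps)
    )
    # Simple per-feature attribution: input value times its weight.
    attributions = X * w
    # Penalize importance the model assigns to flagged features,
    # pulling their contribution toward zero during training.
    expl_penalty = np.mean(np.abs(attributions[:, spurious_mask]))
    return pred_loss + lam * expl_penalty
```

Minimizing this combined loss (e.g. by gradient descent on `w`) trades off predictive fit against agreement with the prior knowledge encoded in `spurious_mask`, with `lam` controlling the strength of that constraint.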
