Poster

Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Value Approximation

Marco Ancona · Cengiz Oztireli · Markus Gross

Pacific Ballroom #63

Keywords: [ Interpretability ] [ Game Theory and Mechanism Design ] [ Algorithms ]


Abstract:

The problem of explaining the behavior of deep neural networks has recently gained a lot of attention. While several attribution methods have been proposed, most come without strong theoretical foundations, which raises questions about their reliability. On the other hand, the literature on cooperative game theory suggests Shapley values as a unique way of assigning relevance scores such that certain desirable properties are satisfied. Unfortunately, the exact evaluation of Shapley values is prohibitively expensive, exponential in the number of input features. In this work, by leveraging recent results on uncertainty propagation, we propose a novel, polynomial-time approximation of Shapley values in deep neural networks. We show that our method produces significantly better approximations of Shapley values than existing state-of-the-art attribution methods.
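For concreteness, the following is a minimal sketch (not the authors' method) of the exact Shapley value computation the abstract refers to, where absent features are replaced by a baseline value, one common choice of characteristic function for attribution. It makes the exponential cost explicit: every feature requires a sum over all coalitions of the remaining features, so the number of model evaluations grows as 2^n. The model `f`, input `x`, and zero baseline below are purely illustrative.

```python
import itertools
import math
import numpy as np

def exact_shapley_values(f, x, baseline):
    """Exact Shapley values for a scalar model f at input x.

    Features outside a coalition are replaced by the baseline.
    Requires O(2^n) model evaluations for n input features.
    """
    n = len(x)
    phi = np.zeros(n)

    def value(coalition):
        # Input where only features in the coalition keep their original values.
        z = np.array(baseline, dtype=float)
        idx = list(coalition)
        z[idx] = np.asarray(x, dtype=float)[idx]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            # Shapley weight |S|! (n - |S| - 1)! / n! for coalitions of size k.
            weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            for S in itertools.combinations(others, k):
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy usage: a small nonlinear function of 3 input features.
f = lambda z: float(np.tanh(z[0] + 2.0 * z[1]) + z[2] ** 2)
x = np.array([1.0, -0.5, 2.0])
baseline = np.zeros(3)
print(exact_shapley_values(f, x, baseline))
```

By the efficiency property, the returned attributions plus f(baseline) sum to f(x); the exponential enumeration above is exactly what the proposed polynomial-time approximation avoids.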
