

Poster

The Many Shapley Values for Model Explanation

Mukund Sundararajan · Amir Najmi

Keywords: [ Accountability, Transparency and Interpretability ] [ Game Theory and Mechanism Design ]


Abstract:

The Shapley value has become the basis for several methods that attribute the prediction of a machine-learning model on an input to its base features. The use of the Shapley value is justified by citing the uniqueness result from~\cite{Shapley53}, which shows that it is the only method that satisfies certain good properties (\emph{axioms}). There is, however, a multiplicity of ways in which the Shapley value is operationalized for model explanation. These differ in how they reference the model, the training data, and the explanation context. Hence they differ in output, rendering the uniqueness result inapplicable. Furthermore, the techniques that rely on the training data produce non-intuitive attributions: for instance, features the model does not use can still receive attribution.
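To make the unused-feature issue concrete, here is a minimal sketch (illustrative, not the paper's code): it compares a baseline-style operationalization, which fills in absent features with a fixed baseline, against a conditional-expectation operationalization, which averages the model over the data conditioned on the present features. The toy model, the two-row dataset, and all helper names are assumptions made for the example.

import itertools
import math
import numpy as np

# Toy model that reads only feature 0 and ignores feature 1.
def model(x):
    return x[0]

# Tiny dataset in which the two features are perfectly correlated.
data = np.array([[0.0, 0.0],
                 [1.0, 1.0]])

def shapley(value, n):
    # Exact Shapley values for an n-player set function `value`.
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                w = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

x = np.array([1.0, 1.0])          # the input being explained
baseline = np.array([0.0, 0.0])   # an arbitrarily chosen baseline

# Baseline-style value function: absent features are set to the baseline.
def v_bshap(S):
    z = baseline.copy()
    for i in S:
        z[i] = x[i]
    return model(z)

# Conditional-expectation value function: absent features are averaged over
# the empirical distribution conditioned on the present features.
def v_ces(S):
    mask = np.ones(len(data), dtype=bool)
    for i in S:
        mask &= data[:, i] == x[i]
    return float(np.mean([model(row) for row in data[mask]]))

print("BShap:", shapley(v_bshap, 2))  # [1.0, 0.0] -- unused feature gets zero
print("CES:  ", shapley(v_ces, 2))    # [0.25, 0.25] -- unused feature gets credit

On this data the baseline formulation gives the ignored second feature zero attribution, while the conditional-expectation formulation splits credit equally between the two perfectly correlated features. Note also that the two sums differ: the former sums to f(x) - f(baseline), the latter to f(x) minus the mean prediction.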

In this paper, we use the axiomatic approach to study the differences between some of the many operationalizations of the Shapley value for attribution. We discuss a technique called Baseline Shapley (BShap), provide a proper uniqueness result for it, and contrast it with two other techniques from prior literature, Integrated Gradients~\cite{STY17} and Conditional Expectation Shapley~\cite{Lundberg2017AUA}.
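For contrast, the following sketch (again illustrative, with an assumed toy model) numerically approximates Integrated Gradients, which attributes by integrating the model's gradient along the straight-line path from a baseline to the input. On f(x) = x0 * x1^2 it yields a different split than the baseline Shapley computation above, even though both satisfy efficiency (attributions sum to f(x) - f(baseline)).

import numpy as np

# Toy differentiable model f(x) = x0 * x1**2 and its hand-coded gradient.
def model(x):
    return x[0] * x[1] ** 2

def grad(x):
    return np.array([x[1] ** 2, 2.0 * x[0] * x[1]])

def integrated_gradients(x, baseline, steps=1000):
    # Midpoint-rule Riemann sum for (x - baseline) times the integral of the
    # gradient along the straight-line path from baseline to x.
    total = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        total += grad(baseline + alpha * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 1.0])
baseline = np.array([0.0, 0.0])

print(integrated_gradients(x, baseline))  # ~[0.333, 0.667]
# BShap on the same model (via the previous sketch) gives [0.5, 0.5];
# both sum to f(x) - f(baseline) = 1 but apportion it differently.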
