

Poster

Incorporating Information into Shapley Values: Reweighting via a Maximum Entropy Approach

Darya Biparva · Donatello Materassi


Abstract:

In this article, we start by drawing a parallel between Shapley values, as adopted in the area of eXplainable AI, and some fundamental results from the area of graphical models. Specifically, we observe that both the marginal contributions needed to compute Shapley values and the graph produced by the Pearl-Verma theorem rely on the choice of an ordering of the variables. For Shapley values, the marginal contributions are averaged over all orderings, whereas in causal inference and graphical model methods the typical approach is to select orderings that produce a graph with a minimal number of edges. We reconcile the two approaches by reinterpreting them from a maximum entropy perspective. Namely, Shapley values assume no prior knowledge about the orderings and treat them as equally likely. Conversely, causal inference approaches apply a form of Occam's razor and consider only orderings producing the simplest explanatory graphs. While Shapley values do not incorporate any available information about the model, we find that the blind application of Occam's razor also does not produce fully satisfactory explanations. Hence, we propose a variation of Shapley values based on entropy maximization that appropriately incorporates prior information about the model.
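The following is a minimal sketch, not the authors' implementation, of the idea that Shapley values are marginal contributions averaged uniformly over variable orderings, and that replacing the uniform distribution with a non-uniform one over orderings (for example, one obtained from a maximum entropy fit to prior knowledge) reweights the resulting attributions. The value function and weights in the example are illustrative assumptions, not taken from the paper.

```python
# Sketch: ordering-weighted attributions. With uniform weights over orderings
# this reduces to the classical Shapley value; non-uniform weights illustrate
# the reweighting idea described in the abstract. All specifics below
# (value function, weights) are hypothetical.
from itertools import permutations


def ordering_weighted_attributions(players, v, weights=None):
    """Average each player's marginal contribution over orderings."""
    orders = list(permutations(players))
    if weights is None:  # uniform distribution => classical Shapley value
        weights = {o: 1.0 / len(orders) for o in orders}
    phi = {p: 0.0 for p in players}
    for order in orders:
        coalition = []
        for p in order:
            before = v(frozenset(coalition))
            coalition.append(p)
            after = v(frozenset(coalition))
            phi[p] += weights[order] * (after - before)
    return phi


if __name__ == "__main__":
    # Toy value function: a feature's worth depends on which others are present.
    def v(S):
        return {frozenset(): 0.0,
                frozenset({"x1"}): 1.0,
                frozenset({"x2"}): 1.0,
                frozenset({"x1", "x2"}): 3.0}[S]

    # Uniform weights recover the classical Shapley values (1.5 each here).
    print(ordering_weighted_attributions(["x1", "x2"], v))
```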
