

Poster

Towards Rigorous Interpretations: a Formalisation of Feature Attribution

Darius Afchar · Vincent Guigue · Romain Hennequin

Virtual

Keywords: [ Fairness, Accountability, and Transparency ] [ Social Aspects of Machine Learning ]


Abstract:

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale for a prediction. Although "relevance" is task-dependent by nature, the precise definitions encountered in the literature are not always consistent. This lack of clarity stems from the fact that we usually do not have access to any ground-truth attribution, and from a more general debate on what good interpretations are. In this paper, we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to verify the proposed properties and provide wrong solutions.
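To make the evaluation idea concrete, here is a minimal sketch (not the authors' method) of how a ground-truth attribution can be defined on a synthetic dataset: a synthetic model `f` depends only on features 0 and 1, so the ground-truth relevant set is known, and a simple permutation check serves as a finite-sample proxy for functional dependence. All names and the tolerance threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic model: depends only on features 0 and 1 out of 5,
# so the ground-truth attribution is known by construction.
def f(X):
    return X[:, 0] * X[:, 1]

X = rng.normal(size=(1000, 5))
ground_truth = {0, 1}

# Illustrative proxy (not the paper's formalism): a feature is
# deemed relevant if permuting it changes the model's output.
relevant = set()
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    if np.mean((f(X) - f(Xp)) ** 2) > 1e-6:
        relevant.add(j)
```

An attribution method evaluated this way can then be scored by whether the feature set it selects matches `ground_truth`.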
