

Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Understanding the Size of the Feature Importance Disagreement Problem in Real-World Data

Aniek Markus · Egill Fridgeirsson · Jan Kors · Katia Verhamme · Jenna Reps · Peter Rijnbeek

Keywords: [ Variable importance ] [ LOCO ] [ Shapley values ] [ Permutation FI ] [ Evaluating explanations ] [ Explainable AI ]


Abstract:

Feature importance can be used to gain insight into prediction models. However, different feature importance methods may produce different explanations for the same model, a phenomenon recently coined the explanation disagreement problem. Little is known about the size of this disagreement problem in real-world data. Such disagreements are harmful in practice: conflicting explanations only make prediction models less transparent to end-users, which contradicts the main goal of these methods. It is therefore important to empirically analyze and understand the feature importance disagreement problem in real-world data. We present a novel evaluation framework that measures the influence of different elements of data complexity on the size of the disagreement problem by modifying real-world data. We investigate the feature importance disagreement problem in two datasets from the Dutch general practitioners database IPCI and two open-source datasets.
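The abstract does not specify how disagreement is quantified, so the sketch below is purely illustrative and is not the authors' framework. It shows one common way to measure disagreement between two of the listed methods (permutation feature importance and Shapley values): compute both importance rankings for the same model and take one minus the Spearman rank correlation. The breast-cancer dataset, random forest model, and the scikit-learn and shap libraries are stand-in assumptions.

```python
# Illustrative sketch only: quantify disagreement between two feature
# importance methods as (1 - Spearman rank correlation) of their rankings.
# Assumes scikit-learn, scipy, and shap are installed; dataset and model
# are arbitrary stand-ins, not the paper's setup.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Method 1: permutation feature importance on held-out data.
perm = permutation_importance(model, X_test, y_test, n_repeats=10,
                              random_state=0)
perm_scores = perm.importances_mean

# Method 2: mean absolute SHAP value per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
if isinstance(shap_values, list):    # older shap: list of per-class arrays
    shap_values = shap_values[1]
elif shap_values.ndim == 3:          # newer shap: (n, features, classes)
    shap_values = shap_values[..., 1]
shap_scores = np.abs(shap_values).mean(axis=0)

# Disagreement: 1 - Spearman rank correlation between the two rankings.
rho, _ = spearmanr(perm_scores, shap_scores)
print(f"Spearman rho = {rho:.3f}, rank disagreement = {1 - rho:.3f}")
```

A value near zero indicates the two methods rank features similarly; larger values indicate the kind of disagreement the paper studies. Rank correlation is only one possible metric; top-k overlap or sign agreement are common alternatives.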
