Contributed Talk
in
Workshop: “Could it have been different?” Counterfactuals in Minds and Machines

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ

Eoin Delaney · Arjun Pakrashi · Derek Greene · Mark Keane


Abstract:

Counterfactual explanations have emerged as a popular solution to the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems: people understand them easily, they apply across different problem domains, and they appear to be legally compliant. While over 100 counterfactual methods exist in the literature, few have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective, for instance by asking people to produce their own counterfactual explanations in order to determine what they consider a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images in two user studies, and then (ii) compares these human-generated explanations to computationally generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
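The contrast the abstract draws, between a “minimal edit” that just crosses the decision boundary and a larger, prototype-approximating edit, can be illustrated with a toy sketch. This is not the authors' method; the boundary, instance, and prototype values below are invented for illustration.

```python
# Hypothetical 2-D illustration (not the paper's method): a minimal-edit
# counterfactual versus a larger, prototype-guided edit.

def minimal_edit_cf(x, boundary, eps=0.01):
    """Nudge x just past a vertical decision boundary at x[0] = boundary."""
    return [boundary + eps if x[0] < boundary else x[0], x[1]]

def prototype_cf(x, prototype, alpha=0.8):
    """Make a larger, 'meaningful' edit by blending x toward the
    prototype of the counterfactual class."""
    return [(1 - alpha) * xi + alpha * pi for xi, pi in zip(x, prototype)]

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

x = [0.2, 0.5]          # misclassified instance (assumed values)
boundary = 0.5          # decision boundary between the two classes
prototype = [0.9, 0.9]  # prototype of the counterfactual class

cf_min = minimal_edit_cf(x, boundary)
cf_proto = prototype_cf(x, prototype)

# The minimal edit barely crosses the boundary; the prototype-guided edit
# travels further but lands near a typical member of the target class.
print(dist(x, cf_min) < dist(x, cf_proto))  # True
```

The paper's finding is that human explainers behave more like `prototype_cf` than `minimal_edit_cf`: they accept a larger edit distance in exchange for a more class-typical result.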