

Contributed talk in Workshop: ICML Workshop on Algorithmic Recourse

Feature Attribution and Recourse via Probabilistic Contrastive Counterfactuals


Abstract:

There has been a recent resurgence of interest in explainable artificial intelligence (XAI), which aims to reduce the opaqueness of AI-based decision-making systems so that humans can scrutinize and trust them. Prior work has focused on two main approaches: (1) attributing responsibility for an algorithm's decisions to its inputs, where responsibility is typically treated as a purely associational concept, which can lead to misleading conclusions; and (2) generating counterfactual explanations and recourse, where explanations are typically obtained by finding the smallest perturbation of an algorithm's input that yields the desired outcome. However, such perturbations may not translate into real-world interventions. In this paper, we propose a principled and novel causality-based approach for explaining black-box decision-making systems that exploits probabilistic contrastive counterfactuals. These counterfactuals yield a unifying framework for generating a wide range of global, local, and contextual explanations that offer insight into what causes an algorithm's decisions, and for generating actionable recourse that translates into real-world interventions.
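For reference, a probabilistic contrastive counterfactual can be phrased in the standard counterfactual notation of the causal-inference literature (the paper's exact formulation may differ) as the probability that the outcome would have been different had an input been different, given what was actually observed:

```latex
% Probability that the algorithm's outcome would have been y' had the
% attribute X taken the value x', given that X = x and Y = y were observed.
P\bigl(Y_{X \leftarrow x'} = y' \mid X = x,\ Y = y\bigr)
```

Quantities of this form generalize Pearl's probabilities of necessity and sufficiency; they are genuinely counterfactual rather than purely associational, which is what separates them from the attribution scores of approach (1).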
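To make the smallest-perturbation style of explanation in approach (2) concrete, here is a minimal sketch, not the paper's method: a brute-force search, over a hypothetical grid of feature values, for the point closest to the input that a black-box classifier maps to the desired outcome. All names here (nearest_counterfactual, predict, the loan-style toy model) are illustrative assumptions.

```python
import itertools

import numpy as np


def nearest_counterfactual(predict, x, desired, candidate_values):
    """Exhaustively search the grid spanned by `candidate_values` for the
    point closest to `x` (in L1 distance) that the black-box classifier
    `predict` maps to `desired`. Returns None if no grid point qualifies."""
    best, best_dist = None, np.inf
    for values in itertools.product(*candidate_values):
        x_cf = np.asarray(values, dtype=float)
        if predict(x_cf) == desired:
            dist = np.abs(x_cf - x).sum()
            if dist < best_dist:
                best, best_dist = x_cf, dist
    return best


def predict(z):
    # Toy black box (illustrative): approve a loan iff income - 2*debt > 10.
    return int(z[0] - 2.0 * z[1] > 10.0)


x = np.array([20.0, 6.0])  # currently rejected: 20 - 2*6 = 8
grid = [np.arange(0.0, 41.0), np.arange(0.0, 11.0)]  # income, debt
print(nearest_counterfactual(predict, x, desired=1, candidate_values=grid))
# e.g. [20. 4.], i.e. "reduce debt by 2"
```

The search minimizes distance in feature space only: nothing guarantees that the returned perturbation corresponds to a feasible real-world intervention, which is precisely the gap the abstract's causality-based approach targets.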