
 
Workshop
ICML Workshop on Algorithmic Recourse
Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez Rodriguez · Isabel Valera · Hima Lakkaraju

Sat Jul 24 04:45 AM -- 01:15 PM (PDT)
Event URL: https://sites.google.com/view/recourse21

Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of these technologies. In this workshop, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. Specifically, we plan to facilitate workshop interactions that will shed light on the following three questions: (i) What are the practical, legal and ethical considerations that decision-makers need to account for when providing recourse? (ii) How do humans understand and act on recourse explanations from a psychological and behavioral perspective? (iii) What are the main technical advances in explainability and causality in ML required for achieving recourse? Our ultimate goal is to foster conversations that will help bridge the gaps arising from the interdisciplinary nature of algorithmic recourse and contribute towards the wider adoption of such methods.

Sat 4:45 a.m. - 5:00 a.m.
Welcome and introduction (Remarks)   
Sat 5:00 a.m. - 5:25 a.m.
Sandra Wachter (Invited Talk)

AI is increasingly used to make automated decisions about humans. These decisions include assessing creditworthiness, making hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and explainable.

To tackle this issue, the European Commission recently published the Artificial Intelligence Act – the world’s first comprehensive framework to regulate AI. The new proposal has several provisions that require bias testing and monitoring as well as transparency tools. But is Europe ready for this task?

In this session, I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests, which are often developed in the US, are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe.

I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy, which have been implemented by tech companies such as Google, Amazon, Vodafone and IBM and have fed into public policy recommendations and legal frameworks around the world.

Sat 5:25 a.m. - 5:30 a.m.
Sandra Wachter - Q&A (Q&A)
Sat 5:30 a.m. - 5:55 a.m.
Berk Ustun (Invited Talk)

One of the most significant findings that we can produce when evaluating recourse in machine learning is that a model has assigned a "prediction without recourse."

Predictions without recourse arise when the optimization problem that we solve to search for recourse actions is infeasible. In practice, the "infeasibility" of this problem shows that a person cannot change their prediction through their actions, i.e., that the model has fixed their prediction based on input variables beyond their control.

In this talk, I will discuss these issues and describe how we can address them by studying the "feasibility" of recourse. First, I will present reasons why we should ensure the feasibility of recourse, even in settings where we may not wish to provide recourse. Next, I will discuss technical challenges that we must overcome to ensure recourse reliably.
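
To make the notion concrete, here is a minimal sketch (with made-up weights and feature names, not material from the talk) of a linear model whose score depends only on an immutable feature: no search over actionable changes can flip the decision, so the recourse problem is infeasible and the model has issued a prediction without recourse.

```python
# Hypothetical linear credit model: the score depends only on age (immutable),
# so no change to the actionable features (income, n_credit_cards) can flip
# the decision -- the recourse search below is infeasible.
import numpy as np
from itertools import product

w = np.array([2.0, 0.0, 0.0])        # weights for [age, income, n_credit_cards]
b = -90.0                            # score >= 0 means the loan is approved
x = np.array([40.0, 30.0, 2.0])      # denied applicant: score = 2*40 - 90 = -10

def score(v):
    return w @ v + b

# Brute-force search over bounded changes to the actionable features only.
feasible = [
    (d_inc, d_cards)
    for d_inc, d_cards in product(range(0, 101, 10), range(0, 6))
    if score(x + np.array([0.0, d_inc, d_cards])) >= 0
]

if not feasible:
    print("Prediction without recourse: no actionable change flips the decision.")
```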

Sat 5:55 a.m. - 6:00 a.m.
Berk Ustun - Q&A (Q&A)
Sat 6:00 a.m. - 6:10 a.m.
  

In social domains, Machine Learning algorithms often prompt individuals to strategically modify their observable attributes to receive more favorable predictions. As a result, the distribution the predictive model is trained on may differ from the one it operates on in deployment. While such distribution shifts, in general, hinder accurate predictions, our work identifies a unique opportunity associated with shifts due to strategic responses: We show that we can use strategic responses effectively to recover causal relationships between the observable features and outcomes we wish to predict. More specifically, we study a game-theoretic model in which a principal deploys a sequence of models to predict an outcome of interest (e.g., college GPA) for a sequence of strategic agents (e.g., college applicants). In response, strategic agents invest efforts and modify their features for better predictions. In such settings, unobserved confounding variables (e.g., family educational background) can influence both an agent's observable features (e.g., high school records) and outcomes (e.g., college GPA). Therefore, standard regression methods (such as OLS) generally produce biased estimators. In order to address this issue, our work establishes a novel connection between strategic responses to machine learning models and instrumental variable (IV) regression, by observing that the sequence of deployed models can be viewed as an instrument that affects agents' observable features but does not directly influence their outcomes. Therefore, two-stage least squares (2SLS) regression can recover the causal relationships between observable features and outcomes.
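
The following is a minimal sketch of the two-stage least squares (2SLS) idea on synthetic data, where the sequence of deployed model weights plays the role of the instrument: it shifts the features agents present but affects the outcome only through those features. The data-generating process and variable names are illustrative assumptions, not the paper's exact setup.

```python
# Minimal 2SLS sketch (illustrative): the deployed model's weights act as an
# instrument Z that moves agents' features X but affects the outcome y only
# through X, so two-stage least squares recovers the causal effect theta
# despite an unobserved confounder u, where plain OLS does not.
import numpy as np

rng = np.random.default_rng(0)
T, d = 5000, 2
theta = np.array([1.0, -0.5])            # true causal effect of features on outcome

Z = rng.normal(size=(T, d))              # weights of the model deployed in each round
u = rng.normal(size=T)                   # unobserved confounder (e.g., background)
# Agents improve features in the direction rewarded by the deployed model,
# and features are also driven by the confounder.
X = 0.8 * Z + u[:, None] + rng.normal(scale=0.1, size=(T, d))
y = X @ theta + 2.0 * u + rng.normal(scale=0.1, size=T)

# OLS is biased because u affects both X and y.
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: (1) regress X on Z, (2) regress y on the fitted X_hat.
B = np.linalg.lstsq(Z, X, rcond=None)[0]
X_hat = Z @ B
theta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

print("OLS :", theta_ols)                # biased away from theta
print("2SLS:", theta_2sls)               # close to [1.0, -0.5]
```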

Sat 6:10 a.m. - 6:20 a.m.
  

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work has focused on two main approaches: (1) attribution of responsibility for an algorithm’s decisions to its inputs, wherein responsibility is typically approached as a purely associational concept that can lead to misleading conclusions; and (2) generating counterfactual explanations and recourse, where these explanations are typically obtained by considering the smallest perturbation in an algorithm’s input that can lead to the desired outcome. However, these perturbations may not translate to real-world interventions. In this paper, we propose a principled and novel causality-based approach for explaining black-box decision-making systems that exploits probabilistic contrastive counterfactuals to provide a unifying framework for generating a wide range of global, local and contextual explanations. These explanations provide insights into what causes an algorithm’s decisions and generate actionable recourse translatable into real-world interventions.
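
The following is a minimal sketch of estimating one probabilistic contrastive counterfactual in a fully specified toy structural causal model via abduction, action and prediction; the model, variable names and probabilities are illustrative assumptions, not the system proposed in the paper.

```python
# Toy Monte Carlo estimate of a contrastive counterfactual probability in a
# fully specified structural causal model (illustrative; not the paper's system).
# Query: for an applicant observed with low income and a denied loan, how likely
# is it that the loan would have been approved had their income been high?
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Exogenous noise terms shared between the factual and counterfactual worlds.
u_inc = rng.random(N)   # drives income
u_dec = rng.random(N)   # drives the decision given income

# Structural equations (binary variables for simplicity).
income = u_inc < 0.4                                    # True = high income
approve = (income & (u_dec < 0.9)) | (~income & (u_dec < 0.2))

# Abduction: keep only noise draws consistent with the observed evidence.
evidence = ~income & ~approve
u_dec_post = u_dec[evidence]

# Action + prediction: set income := high and re-evaluate the decision equation.
approve_cf = u_dec_post < 0.9
print("P(approve had income been high | low income, denied) ~", approve_cf.mean())
# Expected value here: (0.9 - 0.2) / (1 - 0.2) = 0.875
```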

Sat 6:20 a.m. - 6:30 a.m.
  

Machine learning systems are often used in settings where individuals adapt their features to obtain a desired outcome. In such settings, strategic behavior leads to a sharp loss in model performance in deployment. In this work, we tackle this problem by learning classifiers that encourage decision subjects to change their features in a way that leads to improvement in both predicted and true outcomes; in other words, the classifier provides recourse to decision subjects as long as the adaptation is constructive. We do this by framing the dynamics of prediction and adaptation as a two-stage game, and characterize optimal strategies for the model designer and its decision subjects. In benchmarks on simulated and real-world datasets, we find that our method maintains the accuracy of existing approaches while inducing higher levels of improvement and less manipulation.

Sat 6:30 a.m. - 6:40 a.m.
  

Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fairness criteria at the group and individual level which, unlike prior work on equalising the average group-wise distance from the decision boundary, explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to others, such as counterfactual fairness, and show that fairness of recourse is complementary to fairness of prediction.

Sat 6:40 a.m. - 6:45 a.m.
Q&A for Contributed Talks: Part 1 (Q&A)   
Sat 6:45 a.m. - 6:55 a.m.
  

Counterfactual explanations provide means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favourable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is a crucial aspect of meaningful counterfactual explanations. As documented in recent reviews, there is a quickly growing literature of available methods. Yet, in the absence of widely available open-source implementations, the decision in favour of certain models is primarily based on what is readily available. Going forward – to guarantee meaningful comparisons across explanation methods – we present CARLA (Counterfactual And Recourse LibrAry), a Python library for benchmarking counterfactual explanation methods across both different data sets and different machine learning models. In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods. We will open-source CARLA and our experimental results on GitHub, making them available as competitive baselines. We welcome contributions from research groups and practitioners.
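
For readers unfamiliar with how such benchmarks are scored, the sketch below computes three evaluation measures commonly reported for counterfactual explanation methods (validity, cost, sparsity). It is a generic illustration with made-up function names and does not use CARLA's actual API; see the library's repository for its real interfaces.

```python
# Generic sketch of common measures used to benchmark counterfactual explanation
# methods (validity, cost, sparsity). This is NOT CARLA's API.
import numpy as np

def evaluate_counterfactuals(model, X, X_cf, target=1):
    """model: callable returning class labels; X: factuals; X_cf: counterfactuals."""
    preds_cf = model(X_cf)
    validity = np.mean(preds_cf == target)                      # fraction reaching the target class
    cost = np.mean(np.abs(X_cf - X).sum(axis=1))                # average L1 distance moved
    sparsity = np.mean((np.abs(X_cf - X) > 1e-6).sum(axis=1))   # avg. number of changed features
    return {"validity": validity, "l1_cost": cost, "n_changed": sparsity}

# Example with a toy linear model.
w, b = np.array([1.0, 1.0]), -1.5
model = lambda X: (X @ w + b >= 0).astype(int)
X = np.array([[0.2, 0.3], [0.5, 0.1]])
X_cf = np.array([[0.2, 1.4], [1.0, 0.6]])
print(evaluate_counterfactuals(model, X, X_cf))
```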

Sat 6:55 a.m. - 7:05 a.m.
  

This work presents CounterNet, a novel end-to-end learning framework which integrates predictive model training and counterfactual (CF) explanation generation into a single end-to-end pipeline. Prior CF explanation techniques rely on solving separate, time-intensive optimization problems to find CF examples for every single input instance, and also suffer from a misalignment of objectives between model predictions and explanations, which leads to significant shortcomings in the quality of CF explanations. CounterNet, on the other hand, integrates both prediction and explanation in the same framework, which enables optimizing CF example generation only once, together with the predictive model. We propose a novel variant of back-propagation which helps in effectively training CounterNet's network. Finally, we conduct extensive experiments on multiple real-world datasets. Our results show that CounterNet generates high-quality predictions and corresponding CF examples (with high validity) for any new input instance, significantly faster than existing state-of-the-art baselines.
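
The sketch below illustrates the general idea of jointly training a predictor and a counterfactual generator with a shared encoder and a combined loss (prediction accuracy, counterfactual validity, proximity). The architecture, loss weights and training step are illustrative assumptions and do not reproduce CounterNet's actual design or its back-propagation variant.

```python
# Generic sketch of joint prediction + counterfactual-generation training
# (illustrative only; not CounterNet's actual architecture or training scheme).
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, d, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.pred_head = nn.Linear(hidden, 1)   # prediction logit
        self.cf_head = nn.Linear(hidden, d)     # additive counterfactual perturbation

    def forward(self, x):
        h = self.encoder(x)
        return self.pred_head(h).squeeze(-1), x + self.cf_head(h)

d = 5
model = JointModel(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, d)
y = (x.sum(dim=1) > 0).float()

# One joint training step: predictive loss + CF validity + CF proximity.
logit, x_cf = model(x)
logit_cf, _ = model(x_cf)
loss = (bce(logit, y)                     # predictive accuracy
        + bce(logit_cf, 1.0 - y)          # CF validity: flip the label
        + 0.1 * (x_cf - x).abs().mean())  # proximity of the counterfactual
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```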

Sat 7:05 a.m. - 7:15 a.m.
  

Given the increasing promise of Graph Neural Networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. However, such methods do not provide a clear opportunity for recourse: given a prediction, we want to understand how the prediction can be changed in order to achieve a more desirable outcome. In this work, we propose a method for generating counterfactual (CF) explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing less than 3 edges on average, with at least 94% accuracy. This indicates that our method primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations.
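
The sketch below conveys the flavour of edge-deletion counterfactuals on a toy graph: find the smallest set of edge removals that flips a graph classifier's prediction. The exhaustive search and the degree-based stand-in "model" are illustrative assumptions; the paper's method targets trained GNNs on real datasets.

```python
# Illustrative edge-deletion counterfactual search for a graph classifier.
# A simple degree-based scorer stands in for a trained GNN.
import numpy as np
from itertools import combinations

def predict(adj):
    # Stand-in "model": label the graph 1 if its average degree is >= 2.
    return int(adj.sum() / adj.shape[0] >= 2)

adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [1, 0, 1, 0]])          # undirected graph with 5 edges
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if adj[i, j]]
original = predict(adj)

def delete(adj, subset):
    out = adj.copy()
    for i, j in subset:
        out[i, j] = out[j, i] = 0
    return out

# Search for the smallest edge subset whose removal changes the prediction.
explanation = None
for k in range(1, len(edges) + 1):
    for subset in combinations(edges, k):
        if predict(delete(adj, subset)) != original:
            explanation = subset
            break
    if explanation:
        break

print("original prediction:", original, "| minimal CF edge deletions:", explanation)
```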

Sat 7:15 a.m. - 7:25 a.m.
  

As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, Robust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first solution to this critical problem. We also carry out a detailed theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts: (i) we derive a lower bound on the probability of invalidation of recourses generated by existing approaches which are not robust to model shifts, and (ii) we prove that the additional cost incurred due to the robust recourses output by our framework is bounded. Experimental evaluation demonstrates the efficacy of the proposed framework and supports our theoretical findings.
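
For intuition, the sketch below shows a simplified robust-recourse computation for a linear scorer whose weights may shift anywhere within an L2 ball of radius eps, in which case the worst-case score has a closed form. This is an illustrative special case with made-up numbers, not ROAR's adversarial-training procedure for general models.

```python
# Simplified robust recourse for a linear scorer under bounded weight shifts:
# if the weights can move within an L2 ball of radius eps, the worst case of
# (w + dw) @ x + b is w @ x + b - eps * ||x||, so requiring that quantity to be
# nonnegative guarantees the recourse stays valid under any such shift.
import numpy as np

w, b, eps = np.array([1.0, 0.5]), -2.0, 0.1
x = np.array([1.0, 0.5])                 # current (negatively classified) instance

def worst_case_score(v):
    return w @ v + b - eps * np.linalg.norm(v)

# Line search over increases to the second (actionable) feature.
for delta in np.arange(0.0, 10.0, 0.05):
    x_new = x + np.array([0.0, delta])
    if worst_case_score(x_new) >= 0:
        print(f"robust recourse: increase feature 2 by {delta:.2f} "
              f"(worst-case score {worst_case_score(x_new):.3f})")
        break
```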

Sat 7:25 a.m. - 7:30 a.m.
Q&A for Contributed Talks: Part 2 (Q&A)   
Sat 7:30 a.m. - 8:30 a.m.
Poster Session 1 (Poster Session)
Sat 8:30 a.m. - 9:30 a.m.
Solon Barocas, Ruth Byrne, Amit Dhurandhar and Alice Xiang - From counterfactual reasoning to re-applying for a loan: How do we connect the dots? (Panel Discussion)   
Sat 9:30 a.m. - 10:30 a.m.
Break
Sat 10:30 a.m. - 10:55 a.m.
Tobias Gerstenberg (Invited Talk)

As humans, we spend much of our time going beyond the here and now. We dwell on the past, long for the future, and ponder how things could have turned out differently. In this talk, I will argue that people's knowledge of the world is organized around causally structured mental models, and that much of human thought can be understood as cognitive operations over these mental models. Specifically, I will highlight the pervasiveness of counterfactual thinking in human cognition. Counterfactuals are critical for how people make causal judgments, how they explain what happened, and how they hold others responsible for their actions. Based on these empirical insights, I will share some thoughts on the relationship between counterfactual thought and algorithmic recourse.

Tobias Gerstenberg
Sat 10:55 a.m. - 11:00 a.m.
Tobias Gerstenberg - Q&A (Q&A)
Sat 11:00 a.m. - 11:25 a.m.
Been Kim (Invited Talk)

This talk presents oversimplified but practical concepts that practitioners and researchers must know when using and developing interpretability methods for algorithmic recourse. The concepts are WATSOP: (W)rongness, (A)track, (T)esting for practitioners, (S)keptics, (O)bjectives, (P)roper evaluations for researchers. While oversimplified, these are the core points that lead the field to success or failure. I’ll provide concrete steps for each and discuss related work on how you may apply these concepts to your own work.

Been Kim
Sat 11:25 a.m. - 11:30 a.m.
Been Kim - Q&A (Q&A)
Sat 11:30 a.m. - 11:55 a.m.
Elias Bareinboim (Invited Talk)

In this talk, I will discuss recent progress and ideas on how to perform fairness analysis using causal lenses.

Sat 11:55 a.m. - 12:00 p.m.
Elias Bareinboim - Q&A (Q&A)
Sat 12:00 p.m. - 1:00 p.m.
Poster Session 2 (Poster Session)

Author Information

Stratis Tsirtsis (MPI-SWS)

Stratis Tsirtsis is a Ph.D. candidate at the Max Planck Institute for Software Systems. He is interested in building machine learning systems to inform decisions about individuals who exhibit strategic behavior.

Amir-Hossein Karimi (Max Planck Institute for Intelligent Systems)

Amir-Hossein is a PhD student at the Max Planck ETH Center for Learning Systems, supervised by Profs. Schölkopf, Valera, and Hofmann. His work focuses on the intersection of causal and explainable machine learning, primarily on the problem of algorithmic recourse, that is, how to help individuals subject to automated algorithmic systems overcome unfavorable predictions.

Ana Lucic (Partnership on AI, University of Amsterdam)

Research fellow at the Partnership on AI and PhD student at the University of Amsterdam, working primarily on explainable ML.

Manuel Gomez Rodriguez (MPI-SWS)

Manuel Gomez Rodriguez is a faculty member at the Max Planck Institute for Software Systems. Manuel develops human-centric machine learning models and algorithms for the analysis, modeling and control of social, information and networked systems. He has received several recognitions for his research, including an outstanding paper award at NeurIPS’13 and best research paper honorable mentions at KDD’10 and WWW’17. He has served as track chair for FAT* 2020 and as area chair for every major conference in machine learning, data mining and the Web. Manuel has co-authored over 50 publications in top-tier conferences (NeurIPS, ICML, WWW, KDD, WSDM, AAAI) and journals (PNAS, Nature Communications, JMLR, PLOS Computational Biology). Manuel holds a BS in Electrical Engineering from Carlos III University, an MS and a PhD in Electrical Engineering from Stanford University, and has received postdoctoral training at the Max Planck Institute for Intelligent Systems.

Isabel Valera (Saarland University)

Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is also a scholar of the European Laboratory for Learning and Intelligent Systems (ELLIS). Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship and a “Minerva fast track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK). Her research focuses on developing machine learning methods that are flexible, robust, interpretable and fair to analyze real-world data.

Hima Lakkaraju (Harvard)
