Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of these technologies. In this workshop, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. Specifically, we plan to facilitate workshop interactions that will shed light on the following three questions: (i) What are the practical, legal and ethical considerations that decision-makers need to account for when providing recourse? (ii) How do humans understand and act on recourse explanations, from a psychological and behavioral perspective? (iii) What are the main technical advances in explainability and causality in ML required for achieving recourse? Our ultimate goal is to foster conversations that will help bridge the gaps arising from the interdisciplinary nature of algorithmic recourse and contribute towards the wider adoption of such methods.
Sat 4:45 a.m. - 5:00 a.m. | Welcome and introduction (Remarks)
Sat 5:00 a.m. - 5:25 a.m. | Sandra Wachter - How AI weakens legal recourse and remedies (Keynote)

AI is increasingly used to make automated decisions about humans. These decisions include assessing creditworthiness, hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and explainable. To tackle this issue, the EU Commission recently published the Artificial Intelligence Act, the world's first comprehensive framework to regulate AI. The new proposal has several provisions that require bias testing and monitoring as well as transparency tools. But is Europe ready for this task? In this session I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests, which are often developed in the US, are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe. I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy, which have been implemented by tech companies such as Google, Amazon, Vodafone and IBM and have fed into public policy recommendations and legal frameworks around the world.
Sat 5:25 a.m. - 5:30 a.m. | Sandra Wachter - Q&A (Q&A)
Sat 5:30 a.m. - 5:55 a.m. | Berk Ustun - On Predictions without Recourse (Keynote)

One of the most significant findings that we can produce when evaluating recourse in machine learning is that a model has assigned a "prediction without recourse." Predictions without recourse arise when the optimization problem that we solve to search for recourse actions is infeasible. In practice, the infeasibility of this problem shows that a person cannot change their prediction through their actions, i.e., that the model has fixed their prediction based on input variables beyond their control. In this talk, I will discuss these issues and how we can address them by studying the feasibility of recourse. First, I will present reasons why we should ensure the feasibility of recourse, even in settings where we may not wish to provide recourse. Next, I will discuss the technical challenges that we must overcome to ensure recourse reliably.
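As a concrete illustration of the feasibility question raised in this abstract, the sketch below checks whether any action restricted to actionable features can flip a linear classifier's decision; if the underlying linear program is infeasible, the individual has a prediction without recourse. This is a minimal sketch under assumed conventions (the name has_recourse, an actionable-feature mask, and hard bounds on post-action feature values are illustrative), not the speaker's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def has_recourse(x, w, b, actionable, lower, upper, eps=1e-6):
    """Return True if some action over the actionable features can flip a
    linear classifier's decision to the favourable side (w @ x' + b >= 0)."""
    n = len(x)
    # Decision variable: the action a, with post-action features x' = x + a.
    # Favourable-outcome constraint: w @ (x + a) + b >= eps
    #   <=>  -w @ a <= w @ x + b - eps
    A_ub = -np.asarray(w, dtype=float).reshape(1, -1)
    b_ub = np.array([float(np.dot(w, x) + b - eps)])
    bounds = []
    for j in range(n):
        if actionable[j]:
            bounds.append((lower[j] - x[j], upper[j] - x[j]))  # keep x'_j in [lower_j, upper_j]
        else:
            bounds.append((0.0, 0.0))                          # immutable feature: no change allowed
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.status == 0   # 0 = feasible (recourse exists), 2 = infeasible (no recourse)

# Example: income is actionable, age is not; here the decision is fixed by age alone,
# so the function reports a prediction without recourse (False).
x = np.array([30.0, 1.0])                       # [age, income]
w, b = np.array([-1.0, 2.0]), 5.0               # score = -age + 2*income + 5
print(has_recourse(x, w, b, actionable=[False, True], lower=[0, 0], upper=[120, 10]))
```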
Sat 5:55 a.m. - 6:00 a.m. | Berk Ustun - Q&A (Q&A)
Sat 6:00 a.m. - 6:10 a.m. | Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses (Contributed talk)

In social domains, machine learning algorithms often prompt individuals to strategically modify their observable attributes to receive more favorable predictions. As a result, the distribution the predictive model is trained on may differ from the one it operates on in deployment. While such distribution shifts generally hinder accurate predictions, our work identifies a unique opportunity associated with shifts due to strategic responses: we show that strategic responses can be used effectively to recover causal relationships between the observable features and the outcomes we wish to predict. More specifically, we study a game-theoretic model in which a principal deploys a sequence of models to predict an outcome of interest (e.g., college GPA) for a sequence of strategic agents (e.g., college applicants). In response, strategic agents invest effort and modify their features to obtain better predictions. In such settings, unobserved confounding variables (e.g., family educational background) can influence both an agent's observable features (e.g., high school records) and outcomes (e.g., college GPA). Therefore, standard regression methods such as OLS generally produce biased estimators. To address this issue, our work establishes a novel connection between strategic responses to machine learning models and instrumental variable (IV) regression, by observing that the sequence of deployed models can be viewed as an instrument that affects agents' observable features but does not directly influence their outcomes. Therefore, two-stage least squares (2SLS) regression can recover the causal relationships between observable features and outcomes.
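To make the IV connection above concrete, here is a toy simulation rather than the paper's actual model: the structural equations, coefficients, and variable names are invented for illustration. A hidden confounder biases ordinary least squares, while 2SLS that uses the sequence of deployed models as the instrument recovers the causal coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5000, 2                      # number of strategic agents (rounds) and features
beta_true = np.array([1.0, 0.5])    # causal effect of features on the outcome

# Unobserved confounder (e.g., family background) affects both features and outcome.
u = rng.normal(size=T)

# Instrument: the sequence of deployed assessment rules theta_t, drawn independently of u.
# Agents respond strategically by shifting their features toward the deployed rule.
theta = rng.normal(size=(T, d))
x = 0.8 * theta + np.outer(u, [1.0, 1.0]) + rng.normal(scale=0.5, size=(T, d))
y = x @ beta_true + 2.0 * u + rng.normal(scale=0.5, size=T)

# Naive OLS: biased, because u is correlated with both x and y.
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]

# 2SLS: first regress x on the instrument theta, then regress y on the fitted x.
x_hat = theta @ np.linalg.lstsq(theta, x, rcond=None)[0]
beta_2sls = np.linalg.lstsq(x_hat, y, rcond=None)[0]

print("OLS :", beta_ols)    # noticeably off from beta_true
print("2SLS:", beta_2sls)   # close to beta_true
```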
Sat 6:10 a.m. - 6:20 a.m. | Feature Attribution and Recourse via Probabilistic Contrastive Counterfactuals (Contributed talk)

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work has focused on two main approaches: (1)
Sat 6:20 a.m. - 6:30 a.m. | Linear Classifiers that Encourage Constructive Adaptation (Contributed talk)

Machine learning systems are often used in settings where individuals adapt their features to obtain a desired outcome. In such settings, strategic behavior leads to a sharp loss in model performance in deployment.
Sat 6:30 a.m. - 6:40 a.m. | On the Fairness of Causal Algorithmic Recourse (Contributed talk)

Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fairness criteria at the group and individual level which, unlike prior work on equalising the average group-wise distance from the decision boundary, explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria
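To see why causal structure matters here, consider the following toy sketch, which is entirely hypothetical and not the paper's criteria or experiments: two groups sit at the same distance from the decision boundary, but they differ in how strongly an intervention on one feature propagates to another under the structural causal model. The average cost of causal recourse then differs between groups even though boundary distances are equal.

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = np.array([1.0, 1.0]), -2.0        # favourable outcome: w @ x + b >= 0

def min_causal_recourse_cost(x, alpha, deltas=np.linspace(0, 5, 501)):
    """Smallest intervention on x1 that flips the decision when the SCM
    propagates the change downstream: x2 := x2 + alpha * delta."""
    for delta in deltas:
        x_cf = np.array([x[0] + delta, x[1] + alpha * delta])
        if w @ x_cf + b >= 0:
            return delta
    return np.inf

# Two groups drawn from the same feature distribution (equal boundary distance on
# average), but with different downstream causal effect of x1 on x2.
alphas = {"group_A": 1.0, "group_B": 0.2}
for group, alpha in alphas.items():
    xs = rng.normal(loc=[-1.0, 0.0], scale=0.5, size=(200, 2))
    costs = [min_causal_recourse_cost(x, alpha) for x in xs if w @ x + b < 0]
    print(group, "average causal recourse cost:", np.round(np.mean(costs), 2))
# Equalising distance to the decision boundary does not equalise recourse cost
# once downstream causal effects are taken into account.
```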
Sat 6:40 a.m. - 6:45 a.m. | Q&A for Contributed Talks: Part 1 (Q&A)
Sat 6:45 a.m. - 6:55 a.m. | CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (Contributed talk)

Counterfactual explanations provide means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favourable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is a crucial aspect of meaningful counterfactual explanations. As documented in recent reviews, there exists a quickly growing literature with available methods. Yet, in the absence of widely available open–
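The library defines its own interfaces for data, models, and recourse methods; the sketch below is only a generic illustration of the kind of comparison such a benchmark runs, measuring validity (does the counterfactual flip the prediction?) and cost. The function benchmark_recourse_methods and its arguments are assumptions for this sketch, not CARLA's actual API.

```python
import numpy as np

def benchmark_recourse_methods(methods, predict, X_neg):
    """Compare counterfactual/recourse generators on two standard metrics:
    validity (fraction of counterfactuals that obtain the favourable label)
    and average L1 cost of the suggested feature changes."""
    results = {}
    for name, generate_cf in methods.items():
        cfs = np.array([generate_cf(x) for x in X_neg])
        results[name] = {
            "validity": float(np.mean(predict(cfs) == 1)),
            "avg_l1_cost": float(np.mean(np.abs(cfs - X_neg).sum(axis=1))),
        }
    return results

# Tiny demo with a linear model and a naive "project past the boundary" generator.
w, b = np.array([1.0, -1.0]), 0.0
predict = lambda X: (X @ w + b >= 0).astype(int)
X_neg = np.array([[-1.0, 0.5], [-2.0, 1.0]])                   # currently rejected instances
naive = lambda x: x - (x @ w + b - 0.1) * w / (w @ w)          # move just past the boundary
print(benchmark_recourse_methods({"naive_projection": naive}, predict, X_neg))
```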
Sat 6:55 a.m. - 7:05 a.m. | CounterNet: End-to-End Training of Counterfactual Aware Predictions (Contributed talk)

This work presents CounterNet, a novel end-to-end learning framework which integrates predictive model training and counterfactual (CF) explanation generation into a single pipeline. Prior CF explanation techniques rely on solving separate, time-intensive optimization problems to find CF examples for every single input instance, and also suffer from a misalignment of objectives between model predictions and explanations, which leads to significant shortcomings in the quality of CF explanations. CounterNet, on the other hand, integrates both prediction and explanation in the same framework, which enables optimizing CF example generation only once, together with the predictive model. We propose a novel variant of back-propagation which helps to effectively train CounterNet's network. Finally, we conduct extensive experiments on multiple real-world datasets. Our results show that CounterNet generates high-quality predictions and corresponding CF examples (with high validity) for any new input instance significantly faster than existing state-of-the-art baselines.
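For intuition about what a single prediction-plus-explanation pipeline can look like, here is a rough PyTorch sketch. The architecture, loss terms, and weights are illustrative assumptions, and the losses are simply summed rather than trained with the paper's proposed back-propagation variant.

```python
import torch
import torch.nn as nn

class JointPredictorExplainer(nn.Module):
    """Sketch of one network that outputs a prediction and a counterfactual example."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.predictor = nn.Linear(hidden, 1)     # logit for p(y = 1 | x)
        self.generator = nn.Linear(hidden, d)     # perturbation defining the CF example

    def forward(self, x):
        z = self.encoder(x)
        return self.predictor(z).squeeze(-1), x + self.generator(z)

def joint_loss(model, x, y, lambda_validity=1.0, lambda_proximity=0.1):
    """Single objective: accurate predictions plus valid, nearby counterfactuals."""
    bce = nn.BCEWithLogitsLoss()
    y_logit, x_cf = model(x)
    y_cf_logit, _ = model(x_cf)
    pred_loss = bce(y_logit, y)                   # predictive accuracy
    validity_loss = bce(y_cf_logit, 1.0 - y)      # the CF should receive the opposite label
    proximity_loss = (x_cf - x).abs().mean()      # the CF should stay close to the input
    return pred_loss + lambda_validity * validity_loss + lambda_proximity * proximity_loss

# One optimization step on a random batch (y must be a float tensor of 0/1 labels).
model = JointPredictorExplainer(d=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 5), torch.randint(0, 2, (16,)).float()
loss = joint_loss(model, x, y)
loss.backward()
opt.step()
```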
Sat 7:05 a.m. - 7:15 a.m. | CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks (Contributed talk)

Given the increasing promise of Graph Neural Networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions.
Sat 7:15 a.m. - 7:25 a.m. | Towards Robust and Reliable Algorithmic Recourse (Contributed talk)

As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, Robust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first solution to this critical problem. We also carry out a detailed theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts: (1) we derive a lower bound on the probability of invalidation of recourses generated by existing approaches which are not robust to model shifts; (2) we prove that the additional cost incurred due to the robust recourses output by our framework is bounded. Experimental evaluation demonstrates the efficacy of the proposed framework and supports our theoretical findings.
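The sketch below conveys the core idea of recourse that stays valid under model shifts, specialized to a linear classifier where the worst case over a bounded weight perturbation has a closed form. ROAR itself is more general (it uses adversarial training over model shifts); the function name and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def robust_recourse_linear(x, w, b, eps=0.1, lam=0.1, lr=0.05, steps=1000):
    """Search for an action that keeps the recourse valid under any shift of the
    linear model's weights with ||delta_w||_2 <= eps. For the score w @ x' + b,
    the worst-case margin has the closed form  w @ x' + b - eps * ||x'||_2."""
    a = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        x_new = x + a
        norm = np.linalg.norm(x_new) + 1e-12
        robust_margin = w @ x_new + b - eps * norm
        # Hinge-style penalty on the worst-case margin plus an L1 cost on the action.
        grad = lam * np.sign(a)
        if robust_margin < 1.0:
            grad = grad - (w - eps * x_new / norm)
        a -= lr * grad
    return a

# Example: the robust action over-shoots the nominal boundary so that the recourse
# remains valid even if the model is retrained and its weights move slightly.
x = np.array([-1.0, -1.0])
w, b = np.array([1.0, 1.0]), 0.0
a = robust_recourse_linear(x, w, b, eps=0.2)
print("action:", np.round(a, 2), "worst-case margin:",
      round(float(w @ (x + a) + b - 0.2 * np.linalg.norm(x + a)), 2))
```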
Sat 7:25 a.m. - 7:30 a.m. | Q&A for Contributed Talks: Part 2 (Q&A)
Sat 7:30 a.m. - 8:30 a.m. | Poster Session 1 (Poster Session)
Sat 8:30 a.m. - 9:30 a.m. | Solon Barocas, Ruth Byrne, Amit Dhurandhar and Alice Xiang - From counterfactual reasoning to re-applying for a loan: How do we connect the dots? (Panel Discussion)
Sat 9:30 a.m. - 10:30 a.m. | Break
Sat 10:30 a.m. - 10:55 a.m. | Tobias Gerstenberg - Going beyond the here and now: Counterfactual simulation in human cognition (Keynote)

As humans, we spend much of our time going beyond the here and now. We dwell on the past, long for the future, and ponder how things could have turned out differently. In this talk, I will argue that people's knowledge of the world is organized around causally structured mental models, and that much of human thought can be understood as cognitive operations over these mental models. Specifically, I will highlight the pervasiveness of counterfactual thinking in human cognition. Counterfactuals are critical for how people make causal judgments, how they explain what happened, and how they hold others responsible for their actions. Based on these empirical insights, I will share some thoughts on the relationship between counterfactual thought and algorithmic recourse.
Sat 10:55 a.m. - 11:00 a.m. | Tobias Gerstenberg - Q&A (Q&A)
Sat 11:00 a.m. - 11:25 a.m. | Been Kim - Decision makers, practitioners and researchers, we need to talk. (Keynote)

This talk presents oversimplified but practical concepts that practitioners and researchers must know when using and developing interpretability methods for algorithmic recourse. The concepts are WATSOP: (W)rongness, (A)track, (T)esting for practitioners; (S)keptics, (O)bjectives, (P)roper evaluations for researchers. While oversimplified, these are the core points that lead the field to success or failure. I'll provide concrete steps for each and point to related work on how you may apply these concepts to your own work.
Sat 11:25 a.m. - 11:30 a.m. | Been Kim - Q&A (Q&A)
Sat 11:30 a.m. - 11:55 a.m. | Elias Bareinboim - Causal Fairness Analysis (Keynote)

In this talk, I will discuss recent progress and ideas on how to perform fairness analysis using causal lenses.
Sat 11:55 a.m. - 12:00 p.m. | Elias Bareinboim - Q&A (Q&A)
Sat 12:00 p.m. - 1:00 p.m. | Poster Session 2 (Poster Session)
Author Information
Stratis Tsirtsis (MPI-SWS)
Stratis Tsirtsis is a Ph.D. candidate at the Max Planck Institute for Software Systems. He is interested in building machine learning systems to inform decisions about individuals who exhibit strategic behavior.
Amir-Hossein Karimi (Max Planck Institute for Intelligent Systems)
Amir-Hossein Karimi is a final-year PhD candidate at ETH Zurich and the Max Planck Institute for Intelligent Systems, working under the guidance of Prof. Dr. Bernhard Schölkopf and Prof. Dr. Isabel Valera. His research interests lie at the intersection of causal inference, explainable AI, and program synthesis. Amir's contributions to the problem of algorithmic recourse have been recognized through spotlight and oral presentations at top venues such as NeurIPS, ICML, AAAI, AISTATS, ACM FAccT, and ACM AIES. He has also authored a book chapter and a highly regarded survey paper in ACM Computing Surveys. Supported by NSERC, CLS, and Google PhD fellowships, Amir's research agenda aims to address the need for systems that draw on the best of both human and machine capabilities, towards trustworthy human-machine collaboration. Prior to his PhD, Amir earned several awards, including the Spirit of Engineering Science Award (University of Toronto, 2015) and the Alumni Gold Medal Award (University of Waterloo, 2018), for notable community and academic performance. Alongside his education, Amir gained industry experience at Facebook, Google Brain, and DeepMind, and has provided more than $250,000 in AI consulting services to various startups and incubators. Finally, Amir teaches introductory and advanced topics in AI to an online community, @PrinceOfAI.
Ana Lucic (Partnership on AI, University of Amsterdam)
Research fellow at the Partnership on AI and PhD student at the University of Amsterdam, working primarily on explainable ML.
Manuel Gomez-Rodriguez (MPI-SWS)

Manuel Gomez Rodriguez is a faculty member at the Max Planck Institute for Software Systems. Manuel develops human-centric machine learning models and algorithms for the analysis, modeling and control of social, information and networked systems. He has received several recognitions for his research, including an outstanding paper award at NeurIPS’13 and best research paper honorable mentions at KDD’10 and WWW’17. He has served as a track chair for FAT* 2020 and as an area chair for every major conference in machine learning, data mining and the Web. Manuel has co-authored over 50 publications in top-tier conferences (NeurIPS, ICML, WWW, KDD, WSDM, AAAI) and journals (PNAS, Nature Communications, JMLR, PLOS Computational Biology). Manuel holds a BS in Electrical Engineering from Carlos III University, an MS and PhD in Electrical Engineering from Stanford University, and has received postdoctoral training at the Max Planck Institute for Intelligent Systems.
Isabel Valera (Saarland University)
Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is also a scholar of the European Laboratory for Learning and Intelligent Systems (ELLIS). Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship and a “Minerva Fast Track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and her MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK). Her research focuses on developing machine learning methods that are flexible, robust, interpretable and fair for the analysis of real-world data.
Hima Lakkaraju (Harvard)
More from the Same Authors
- 2021: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju)
- 2021: Order in the Court: Explainable AI Methods Prone to Disagreement (Michael Neely · Stefan F. Schouten · Ana Lucic)
- 2021: On the Connections between Counterfactual Explanations and Adversarial Examples (Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju)
- 2021: Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations (Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju)
- 2021: What will it take to generate fairness-preserving explanations? (Jessica Dai · Sohini Upadhyay · Hima Lakkaraju)
- 2021: Feature Attributions and Counterfactual Explanations Can Be Manipulated (Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju)
- 2021: Counterfactual Explanations for Graph Neural Networks (Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri)
- 2021: On the Connections between Counterfactual Explanations and Adversarial Examples (Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju)
- 2021: Towards Robust and Reliable Algorithmic Recourse (Sohini Upadhyay · Shalmali Joshi · Hima Lakkaraju)
- 2021: On the Fairness of Causal Algorithmic Recourse (Julius von Kügelgen · Amir-Hossein Karimi · Umang Bhatt · Isabel Valera · Adrian Weller · Bernhard Schölkopf)
- 2021: Learning to Switch Among Agents in a Team (Manuel Gomez-Rodriguez · Vahid Balazadeh Meresht)
- 2021: Counterfactual Explanations in Sequential Decision Making Under Uncertainty (Stratis Tsirtsis · Abir De · Manuel Gomez-Rodriguez)
- 2021: Reliable Post hoc Explanations: Modeling Uncertainty in Explainability (Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju)
- 2021: Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles (Ana Lucic · Harrie Oosterhuis · Hinda Haned · Maarten de Rijke)
- 2021: Towards a Unified Framework for Fair and Stable Graph Representation Learning (Chirag Agarwal · Hima Lakkaraju · Marinka Zitnik)
- 2021: Reliable Post hoc Explanations: Modeling Uncertainty in Explainability (Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju)
- 2021: Differentiable Learning Under Triage (Nastaran Okati · Abir De · Manuel Gomez-Rodriguez)
- 2021: To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions (Kim de Bie · Ana Lucic · Hinda Haned)
- 2021: CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks (Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri)
- 2023: Finding Counterfactually Optimal Action Sequences in Continuous State Spaces (Stratis Tsirtsis · Manuel Gomez-Rodriguez)
- 2023: Fair Machine Unlearning: Data Removal while Mitigating Disparities (Alex Oesterling · Jiaqi Ma · Flavio Calmon · Hima Lakkaraju)
- 2023: Evaluating the Causal Reasoning Abilities of Large Language Models (Isha Puri · Hima Lakkaraju)
- 2023: Designing Decision Support Systems Using Counterfactual Prediction Sets (Eleni Straitouri · Manuel Gomez-Rodriguez)
- 2023: Human-Aligned Calibration for AI-Assisted Decision Making (Nina Corvelo Benz · Manuel Gomez-Rodriguez)
- 2023: Himabindu Lakkaraju - Regulating Explainable AI: Technical Challenges and Opportunities (Hima Lakkaraju)
- 2023 Workshop: “Could it have been different?” Counterfactuals in Minds and Machines (Nina Corvelo Benz · Ricardo Dominguez-Olmedo · Manuel Gomez-Rodriguez · Thorsten Joachims · Amir-Hossein Karimi · Stratis Tsirtsis · Isabel Valera · Sarah A Wu)
- 2023: Efficient Estimation of Local Robustness of Machine Learning Models (Tessa Han · Suraj Srinivas · Hima Lakkaraju)
- 2023 Poster: On Data Manifolds Entailed by Structural Causal Models (Ricardo Dominguez-Olmedo · Amir-Hossein Karimi · Georgios Arvanitidis · Bernhard Schölkopf)
- 2023 Poster: Variational Mixture of HyperGenerators for Learning Distributions over Functions (Batuhan Koyuncu · Pablo Sanchez Martin · Ignacio Peis · Pablo Olmos · Isabel Valera)
- 2023 Poster: On the Relationship Between Explanation and Prediction: A Causal View (Amir-Hossein Karimi · Krikamol Muandet · Simon Kornblith · Bernhard Schölkopf · Been Kim)
- 2023 Poster: Improving Expert Predictions with Conformal Prediction (Eleni Straitouri · Luke Lequn Wang · Nastaran Okati · Manuel Gomez-Rodriguez)
- 2023 Poster: On the Within-Group Fairness of Screening Classifiers (Nastaran Okati · Stratis Tsirtsis · Manuel Gomez-Rodriguez)
- 2023 Tutorial: Responsible AI for Generative AI in Practice: Lessons Learned and Open Challenges (Krishnaram Kenthapadi · Hima Lakkaraju · Nazneen Rajani)
- 2022 Workshop: New Frontiers in Adversarial Machine Learning (Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo)
- 2022 Poster: Improving Screening Processes via Calibrated Subset Selection (Luke Lequn Wang · Thorsten Joachims · Manuel Gomez-Rodriguez)
- 2022 Spotlight: Improving Screening Processes via Calibrated Subset Selection (Luke Lequn Wang · Thorsten Joachims · Manuel Gomez-Rodriguez)
- 2022 Poster: Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization (Adrián Javaloy · Maryam Meghdadi · Isabel Valera)
- 2022 Poster: On the Adversarial Robustness of Causal Algorithmic Recourse (Ricardo Dominguez-Olmedo · Amir-Hossein Karimi · Bernhard Schölkopf)
- 2022 Spotlight: Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization (Adrián Javaloy · Maryam Meghdadi · Isabel Valera)
- 2022 Spotlight: On the Adversarial Robustness of Causal Algorithmic Recourse (Ricardo Dominguez-Olmedo · Amir-Hossein Karimi · Bernhard Schölkopf)
- 2021: Poster (Shiji Zhou · Nastaran Okati · Wichinpong Sinchaisri · Kim de Bie · Ana Lucic · Mina Khan · Ishaan Shah · JINGHUI LU · Andreas Kirsch · Julius Frost · Ze Gong · Gokul Swamy · Ah Young Kim · Ahmed Baruwa · Ranganath Krishnan)
- 2021: Differentiable learning Under Algorithmic Triage (Manuel Gomez-Rodriguez)
- 2021: Towards Robust and Reliable Model Explanations for Healthcare (Hima Lakkaraju)
- 2021 Poster: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju)
- 2021 Spotlight: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju)
- 2020 Poster: Robust and Stable Black Box Explanations (Hima Lakkaraju · Nino Arsov · Osbert Bastani)
- 2018 Tutorial: Learning with Temporal Point Processes (Manuel Gomez-Rodriguez · Isabel Valera)