Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, beyond requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of these technologies. In this workshop, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. Specifically, we plan to facilitate workshop interactions that will shed light on the following three questions: (i) What are the practical, legal, and ethical considerations that decision-makers need to account for when providing recourse? (ii) How do humans understand and act on recourse explanations, from a psychological and behavioural perspective? (iii) What are the main technical advances in explainability and causality in ML required for achieving recourse? Our ultimate goal is to foster conversations that help bridge the gaps arising from the interdisciplinary nature of algorithmic recourse and contribute towards the wider adoption of such methods.
Sat 4:45 a.m. - 5:00 a.m. | Welcome and introduction (Remarks)
Sat 5:00 a.m. - 5:25 a.m. | Sandra Wachter - How AI weakens legal recourse and remedies (Keynote)
AI is increasingly used to make automated decisions about humans. These decisions include assessing creditworthiness, making hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and explainable. To tackle this issue, the European Commission recently published the Artificial Intelligence Act, the world's first comprehensive framework to regulate AI. The new proposal has several provisions that require bias testing and monitoring as well as transparency tools. But is Europe ready for this task? In this session I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests - which are often developed in the US - are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe. I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy, which have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM and have fed into public policy recommendations and legal frameworks around the world.
Sat 5:25 a.m. - 5:30 a.m. | Sandra Wachter - Q&A (Q&A)
Sat 5:30 a.m. - 5:55 a.m. | Berk Ustun - On Predictions without Recourse (Keynote)
One of the most significant findings we can produce when evaluating recourse in machine learning is that a model has assigned a "prediction without recourse." Predictions without recourse arise when the optimization problem that we solve to search for recourse actions is infeasible. In practice, the "infeasibility" of this problem shows that a person cannot change their prediction through their actions - i.e., that the model has fixed their prediction on the basis of input variables beyond their control. In this talk, I will discuss these issues and how we can address them by studying the "feasibility" of recourse. First, I will present reasons why we should ensure the feasibility of recourse, even in settings where we may not wish to provide recourse. Next, I will discuss technical challenges that we must overcome to ensure recourse reliably.
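The infeasibility check described above can be illustrated with a toy example. The sketch below is an illustration only (not material from the talk): it enumerates a grid of candidate actions over the actionable features of a simple linear classifier and reports whether any of them yields a favourable prediction; when none does, the individual has received a prediction without recourse. The classifier, feature set-up, and action grid are assumptions made for the example.

```python
# Toy check for a "prediction without recourse" under assumed settings.
import itertools
import numpy as np

w = np.array([0.0, 0.0, -3.0])     # classifier weights; only the immutable feature matters
b = 1.0
x = np.array([0.2, 0.5, 1.0])      # denied individual: score = 1.0 - 3.0 = -2.0 < 0
actionable = [0, 1]                # indices the person can actually change
grid = np.linspace(-1.0, 1.0, 21)  # candidate changes per actionable feature

def has_recourse(x, w, b, actionable, grid):
    """Return True if some combination of actionable changes flips the prediction."""
    for changes in itertools.product(grid, repeat=len(actionable)):
        x_new = x.copy()
        x_new[actionable] += changes
        if w @ x_new + b >= 0:     # favourable prediction reached
            return True
    return False                   # infeasible: prediction fixed by immutable features

print(has_recourse(x, w, b, actionable, grid))  # False -> prediction without recourse
```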
Sat 5:55 a.m. - 6:00 a.m. | Berk Ustun - Q&A (Q&A)
Sat 6:00 a.m. - 6:10 a.m. | Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses (Contributed talk)
In social domains, machine learning algorithms often prompt individuals to strategically modify their observable attributes to receive more favorable predictions. As a result, the distribution the predictive model is trained on may differ from the one it operates on in deployment. While such distribution shifts generally hinder accurate prediction, our work identifies a unique opportunity associated with shifts due to strategic responses: we show that strategic responses can be used effectively to recover causal relationships between the observable features and the outcomes we wish to predict. More specifically, we study a game-theoretic model in which a principal deploys a sequence of models to predict an outcome of interest (e.g., college GPA) for a sequence of strategic agents (e.g., college applicants). In response, strategic agents invest effort and modify their features to obtain better predictions. In such settings, unobserved confounding variables (e.g., family educational background) can influence both an agent's observable features (e.g., high school records) and outcomes (e.g., college GPA), so standard regression methods such as OLS generally produce biased estimators. To address this issue, our work establishes a novel connection between strategic responses to machine learning models and instrumental variable (IV) regression, by observing that the sequence of deployed models can be viewed as an instrument that affects agents' observable features but does not directly influence their outcomes. Two-stage least squares (2SLS) regression can therefore recover the causal relationships between observable features and outcomes.
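To make the argument above concrete, here is a small simulation sketch under assumed functional forms (not the paper's actual model or data): an unobserved confounder drives both the observed feature and the outcome, agents shift their feature in proportion to the weight theta of the model they face, and theta acts as an instrument. OLS is biased by the confounder, while two-stage least squares recovers the causal effect.

```python
# Simulated strategic responses: the deployed model weight serves as an instrument.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 1.5               # true causal effect of the feature on the outcome

u = rng.normal(size=n)               # unobserved confounder (e.g., family background)
theta = rng.normal(size=n)           # model weight each agent faces (the instrument)
x = 0.8 * u + 0.5 * theta + rng.normal(size=n)  # observed feature after strategic response
y = beta * x + 1.0 * u + rng.normal(size=n)     # outcome of interest

def fit(z, t):
    """Intercept and slope of regressing t on z."""
    Z = np.column_stack([np.ones_like(z), z])
    return np.linalg.lstsq(Z, t, rcond=None)[0]

print("OLS  estimate:", fit(x, y)[1])            # biased upward by the confounder u

# 2SLS: first stage predicts x from theta; second stage regresses y on the fitted values
a0, a1 = fit(theta, x)
print("2SLS estimate:", fit(a0 + a1 * theta, y)[1])  # close to the true beta = 1.5
```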
Sat 6:10 a.m. - 6:20 a.m. | Feature Attribution and Recourse via Probabilistic Contrastive Counterfactuals (Contributed talk)
There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work has focused on two main approaches: (1) …
Sat 6:20 a.m. - 6:30 a.m. | Linear Classifiers that Encourage Constructive Adaptation (Contributed talk)
Machine learning systems are often used in settings where individuals adapt their features to obtain a desired outcome. In such settings, strategic behavior leads to a sharp loss in model performance in deployment. …
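The performance loss mentioned above can be reproduced in a toy simulation. The sketch below is an illustration under assumed data-generating and adaptation rules (not the paper's setup): rejected individuals within a fixed budget game one feature of a linear classifier just enough to cross the decision boundary, and accuracy is compared before and after adaptation.

```python
# Toy simulation of strategic (non-constructive) adaptation against a linear classifier.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)  # true labels

w, b = np.array([1.0, 1.0]), 0.0               # deployed linear classifier
score = X @ w + b
acc_before = np.mean((score > 0).astype(int) == y)

# strategic response: rejected agents within budget raise feature 0 just enough to pass
budget = 1.0
X_adapted = X.copy()
gaming = (score <= 0) & (score > -budget * w[0])
X_adapted[gaming, 0] += (-score[gaming] / w[0]) + 1e-6

acc_after = np.mean(((X_adapted @ w + b) > 0).astype(int) == y)
print(acc_before, acc_after)                   # accuracy drops after adaptation
```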
Sat 6:30 a.m. - 6:40 a.m. | On the Fairness of Causal Algorithmic Recourse (Contributed talk)
Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fairness criteria at the group and individual level, which, unlike prior work on equalising the average group-wise distance from the decision boundary, explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria …
Sat 6:40 a.m. - 6:45 a.m. | Q&A for Contributed Talks: Part 1 (Q&A)
Sat 6:45 a.m. - 6:55 a.m. | CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (Contributed talk)
Counterfactual explanations provide a means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favourable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is a crucial aspect of producing meaningful counterfactual explanations. As documented in recent reviews, the literature of available methods is growing quickly. Yet, in the absence of widely available open-…
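CARLA itself packages data sets, model catalogues, and recourse methods behind a common interface; the snippet below is not its API, only a minimal numpy illustration of two metrics such a benchmark typically reports for a batch of counterfactuals: validity (does the prediction flip?) and cost (how far must the individual move?).

```python
# Generic benchmark metrics for counterfactual explanations (illustrative, not CARLA's API).
import numpy as np

def benchmark(predict, X_factual, X_counterfactual):
    """Return the validity rate and mean L1 cost of a batch of counterfactuals."""
    flipped = predict(X_counterfactual) != predict(X_factual)
    validity = flipped.mean()
    cost = np.abs(X_counterfactual - X_factual).sum(axis=1).mean()
    return validity, cost

# usage with a toy linear model and hand-made counterfactuals
w, b = np.array([1.0, -1.0]), 0.0
predict = lambda X: (X @ w + b > 0).astype(int)
X_f = np.array([[0.0, 1.0], [0.2, 0.8]])       # both currently rejected
X_cf = np.array([[1.5, 1.0], [0.2, 0.1]])      # candidate counterfactuals
print(benchmark(predict, X_f, X_cf))           # approximately (1.0, 1.1)
```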
Sat 6:55 a.m. - 7:05 a.m. | CounterNet: End-to-End Training of Counterfactual Aware Predictions (Contributed talk)
This work presents CounterNet, a novel end-to-end learning framework that integrates predictive model training and counterfactual (CF) explanation generation into a single pipeline. Prior CF explanation techniques rely on solving separate, time-intensive optimization problems to find CF examples for every single input instance, and they also suffer from a misalignment of objectives between model predictions and explanations, which leads to significant shortcomings in the quality of the CF explanations. CounterNet, on the other hand, integrates both prediction and explanation in the same framework, which enables optimizing the CF example generation only once, together with the predictive model. We propose a novel variant of back-propagation that helps to effectively train CounterNet's network. Finally, we conduct extensive experiments on multiple real-world datasets. Our results show that CounterNet generates high-quality predictions and corresponding CF examples (with high validity) for any new input instance, significantly faster than existing state-of-the-art baselines.
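A rough PyTorch-style sketch of the joint prediction-plus-counterfactual idea described above is given below. The architecture, loss terms, and weights are illustrative assumptions, not the authors' actual CounterNet design or their back-propagation variant.

```python
# Illustrative joint training of a predictor and a counterfactual generator.
import torch
import torch.nn as nn

class JointPredictorCF(nn.Module):
    def __init__(self, d, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.predictor = nn.Linear(hidden, 1)      # prediction head: P(y = 1)
        self.cf_generator = nn.Linear(hidden, d)   # proposes a feature perturbation

    def forward(self, x):
        z = self.encoder(x)
        y_hat = torch.sigmoid(self.predictor(z))
        x_cf = x + self.cf_generator(z)            # candidate counterfactual example
        return y_hat, x_cf

def joint_loss(model, x, y, lam_valid=1.0, lam_prox=0.1):
    y_hat, x_cf = model(x)
    pred_loss = nn.functional.binary_cross_entropy(y_hat, y)
    y_cf, _ = model(x_cf)
    # validity: the counterfactual should receive the opposite prediction
    valid_loss = nn.functional.binary_cross_entropy(y_cf, 1.0 - y)
    prox_loss = (x_cf - x).abs().mean()            # proximity: keep changes small
    return pred_loss + lam_valid * valid_loss + lam_prox * prox_loss

# one joint training step on toy data
x = torch.randn(64, 5)
y = torch.randint(0, 2, (64, 1)).float()
model = JointPredictorCF(5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
joint_loss(model, x, y).backward()
opt.step()
```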
Sat 7:05 a.m. - 7:15 a.m. | CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks (Contributed talk)
Given the increasing promise of Graph Neural Networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. …
Sat 7:15 a.m. - 7:25 a.m. | Towards Robust and Reliable Algorithmic Recourse (Contributed talk)
As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques that provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, Robust Algorithmic Recourse (ROAR), that leverages adversarial training to find recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first solution to this critical problem. We also carry out a detailed theoretical analysis that underscores the importance of constructing recourses that are robust to model shifts: (1) we derive a lower bound on the probability of invalidation for recourses generated by existing approaches that are not robust to model shifts; (2) we prove that the additional cost incurred by the robust recourses output by our framework is bounded. Experimental evaluation demonstrates the efficacy of the proposed framework and supports our theoretical findings.
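The robustness requirement described above can be illustrated for a linear classifier, where the worst case over an epsilon-ball of weight perturbations has a closed form: the minimum of (w + dw).x + b over ||dw|| <= eps equals w.x + b - eps * ||x||. The toy sketch below is an illustration only (ROAR itself uses adversarial training and is not limited to linear models): it searches for the cheapest recourse whose prediction stays favourable under this worst-case shift.

```python
# Toy robust recourse for a linear classifier under bounded weight perturbations.
import numpy as np

def worst_case_score(x, w, b, eps):
    """Score of x under the most adversarial weight shift of L2 norm at most eps."""
    return w @ x + b - eps * np.linalg.norm(x)

def robust_linear_recourse(x, w, b, eps, step=0.01, max_iter=10_000):
    """Move x along w (the cheapest direction for a linear score) until the
    prediction is favourable even under the worst-case model shift."""
    delta = np.zeros_like(x)
    direction = w / np.linalg.norm(w)
    for _ in range(max_iter):
        if worst_case_score(x + delta, w, b, eps) >= 0:
            return delta
        delta += step * direction
    return None  # no robust recourse found within the search budget

w, b = np.array([1.0, 2.0]), -4.0
x = np.array([0.5, 0.5])          # currently rejected: score = -2.5
delta = robust_linear_recourse(x, w, b, eps=0.1)
print(delta, worst_case_score(x + delta, w, b, 0.1))
```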
Sat 7:25 a.m. - 7:30 a.m. | Q&A for Contributed Talks: Part 2 (Q&A)
Sat 7:30 a.m. - 8:30 a.m. | Poster Session 1 (Poster Session)
Sat 8:30 a.m. - 9:30 a.m. | Solon Barocas, Ruth Byrne, Amit Dhurandhar and Alice Xiang - From counterfactual reasoning to re-applying for a loan: How do we connect the dots? (Panel Discussion)
Sat 9:30 a.m. - 10:30 a.m. | Break
Sat 10:30 a.m. - 10:55 a.m. | Tobias Gerstenberg - Going beyond the here and now: Counterfactual simulation in human cognition (Keynote)
As humans, we spend much of our time going beyond the here and now. We dwell on the past, long for the future, and ponder how things could have turned out differently. In this talk, I will argue that people's knowledge of the world is organized around causally structured mental models, and that much of human thought can be understood as cognitive operations over these mental models. Specifically, I will highlight the pervasiveness of counterfactual thinking in human cognition. Counterfactuals are critical for how people make causal judgments, how they explain what happened, and how they hold others responsible for their actions. Based on these empirical insights, I will share some thoughts on the relationship between counterfactual thought and algorithmic recourse.
Sat 10:55 a.m. - 11:00 a.m. | Tobias Gerstenberg - Q&A (Q&A)
Sat 11:00 a.m. - 11:25 a.m. | Been Kim - Decision makers, practitioners and researchers, we need to talk. (Keynote)
This talk presents oversimplified but practical concepts that practitioners and researchers must know when using and developing interpretability methods for algorithmic recourse. The concepts are WATSOP: (W)rongness, (A)track, (T)esting for practitioners; (S)keptics, (O)bjectives, (P)roper evaluations for researchers. While oversimplified, these are the core points that lead the field to success or failure. I will provide concrete steps for each, and related work on how you may apply these concepts to your own work.
Sat 11:25 a.m. - 11:30 a.m. | Been Kim - Q&A (Q&A)
Sat 11:30 a.m. - 11:55 a.m. | Elias Bareinboim - Causal Fairness Analysis (Keynote)
In this talk, I will discuss recent progress and ideas on how to perform fairness analysis using causal lenses.
Sat 11:55 a.m. - 12:00 p.m. | Elias Bareinboim - Q&A (Q&A)
Sat 12:00 p.m. - 1:00 p.m. | Poster Session 2 (Poster Session)