Workshop
ICML Workshop on Algorithmic Recourse
Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju

Sat Jul 24 04:45 AM -- 01:15 PM (PDT)
Event URL: https://sites.google.com/view/recourse21

Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of said technologies. In this workshop, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. Specifically, we plan to facilitate workshop interactions that will shed light on the following three questions: (i) What are the practical, legal, and ethical considerations that decision-makers need to account for when providing recourse? (ii) How do humans understand and act on recourse explanations from a psychological and behavioral perspective? (iii) What are the main technical advances in explainability and causality in ML required for achieving recourse? Our ultimate goal is to foster conversations that will help bridge the gaps arising from the interdisciplinary nature of algorithmic recourse and contribute towards the wider adoption of such methods.

Author Information

Stratis Tsirtsis (MPI-SWS)

Stratis Tsirtsis is a Ph.D. candidate at the Max Planck Institute for Software Systems. He is interested in building machine learning systems to inform decisions about individuals who exhibit strategic behavior.

Amir-Hossein Karimi (Max Planck Institute for Intelligent Systems)

Amir-Hossein is a PhD student at the Max Planck ETH Center for Learning Systems, supervised by Profs. Schölkopf, Valera, and Hofmann. His work focuses on the intersection of causal and explainable machine learning, primarily on the problem of algorithmic recourse, that is, how to help individuals subject to automated algorithmic systems overcome unfavorable predictions.

Ana Lucic (Partnership on AI, University of Amsterdam)

Ana Lucic is a research fellow at the Partnership on AI and a PhD student at the University of Amsterdam, working primarily on explainable ML.

Manuel Gomez-Rodriguez (MPI-SWS)

Manuel Gomez Rodriguez is a faculty member at the Max Planck Institute for Software Systems. Manuel develops human-centric machine learning models and algorithms for the analysis, modeling, and control of social, information, and networked systems. He has received several recognitions for his research, including an outstanding paper award at NeurIPS’13 and best research paper honorable mentions at KDD’10 and WWW’17. He has served as track chair for FAT* 2020 and as area chair for every major conference in machine learning, data mining, and the Web. Manuel has co-authored over 50 publications in top-tier conferences (NeurIPS, ICML, WWW, KDD, WSDM, AAAI) and journals (PNAS, Nature Communications, JMLR, PLOS Computational Biology). Manuel holds a BS in Electrical Engineering from Carlos III University and an MS and PhD in Electrical Engineering from Stanford University, and has received postdoctoral training at the Max Planck Institute for Intelligent Systems.

Isabel Valera (Saarland University)

Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is also a scholar of the European Laboratory for Learning and Intelligent Systems (ELLIS). Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship and a “Minerva fast track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and her MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK). Her research focuses on developing machine learning methods that are flexible, robust, interpretable, and fair for analyzing real-world data.

Hima Lakkaraju (Harvard)