Workshop
Responsible Decision Making in Dynamic Environments
Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier
Ballroom 1
Sat 23 Jul, 6 a.m. PDT
Algorithmic decision-making systems are increasingly used in sensitive applications such as advertising, resume reviewing, employment, credit lending, policing, criminal justice, and beyond. The long-term promise of these approaches is to automate, augment, and eventually improve on human decisions, which can be biased or unfair, by leveraging machine learning to make decisions supported by historical data. Unfortunately, a growing body of evidence shows that current machine learning technology is vulnerable to privacy or security attacks, lacks interpretability, or reproduces (and even exacerbates) historical biases or discriminatory behavior against certain social groups.
Most of the literature on building socially responsible algorithmic decision-making systems focuses on a static scenario in which algorithmic decisions do not change the data distribution. However, real-world applications involve nonstationarities and feedback loops that must be taken into account to measure and mitigate unfairness in the long term. These feedback loops can arise from the learning process itself, which may be biased by insufficient exploration, or from changes in the environment's dynamics due to strategic responses of the various stakeholders. From a machine learning perspective, such sequential processes are primarily studied through counterfactual analysis and reinforcement learning.
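As a minimal, hypothetical illustration of such a feedback loop, the sketch below simulates a selective-labels setting: a decision rule observes outcomes only for the individuals it accepts, retrains on that biased sample, and its estimate of the population drifts while the acceptance rate collapses. The setup, function names, and parameters are illustrative assumptions, not a method presented at the workshop.

```python
# Illustrative sketch of a selective-labels feedback loop: decisions made at
# round t determine which outcomes are observed, which biases the model used
# at round t+1. All parameters here are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate(rounds=6, n=10_000, true_mean=0.0):
    """Each round: score applicants, accept those above the current estimate,
    observe outcomes only for accepted applicants, and re-estimate the mean."""
    est_mean = 0.0
    history = []
    for t in range(rounds):
        scores = rng.normal(true_mean, 1.0, size=n)  # latent quality proxy
        accepted = scores > est_mean                 # decision rule feeds back
        if not accepted.any():                       # nothing left to learn from
            break
        est_mean = scores[accepted].mean()           # naive retraining on biased sample
        history.append((t, accepted.mean(), est_mean))
    return history

for t, acceptance_rate, est in simulate():
    print(f"round {t}: acceptance rate {acceptance_rate:.3f}, estimated mean {est:.2f}")
```

In this toy run the estimated mean drifts upward each round even though the true population never changes, so the acceptance rate shrinks, a simple instance of the insufficient-exploration bias described above.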
The purpose of this workshop is to bring together researchers from both industry and academia working on the full spectrum of responsible decision-making in dynamic environments, from theory to practice. In particular, we encourage submissions on the following topics: fairness, privacy and security, robustness, conservative and safe algorithms, explainability and interpretability.
Schedule
Sat 6:00 a.m. - 2:30 p.m. | Please visit the workshop website for the full program (Program) | link
Sat 6:00 a.m. - 6:10 a.m. | Introduction and opening remarks (Intro) | SlidesLive Video
Sat 6:10 a.m. - 6:40 a.m. | Responsible Decision-Making in Batch RL Settings (Keynote) | SlidesLive Video | Finale Doshi-Velez
Sat 6:40 a.m. - 8:00 a.m. | Poster session (in-person only, with coffee break)
Sat 8:00 a.m. - 8:30 a.m. | Robust Multivalid Uncertainty Quantification (Keynote) | SlidesLive Video | Aaron Roth
Sat 8:30 a.m. - 8:45 a.m. | Individually Fair Learning with One-Sided Feedback (Contributed talk) | SlidesLive Video | Yahav Bechavod
Sat 8:45 a.m. - 9:00 a.m. | Reward Reports for Reinforcement Learning (Contributed talk) | SlidesLive Video | Nathan Lambert
Sat 11:00 a.m. - 11:30 a.m. | Dimension Reduction Tools and Their Use in Responsible Data Understanding in Dynamic Environments (Keynote) | SlidesLive Video | Cynthia Rudin
Sat 11:30 a.m. - 12:00 p.m. | Explanations in Whose Interests? (Keynote) | SlidesLive Video
Sat 12:00 p.m. - 1:00 p.m. | Poster session (in-person only, with coffee break)
Sat 1:00 p.m. - 1:30 p.m. | Exposure-Aware Recommendation using Contextual Bandits (Keynote) | SlidesLive Video
Sat 1:30 p.m. - 2:00 p.m. | Modeling Recommender Ecosystems - Some Considerations (Keynote) | SlidesLive Video | Craig Boutilier
Sat 2:00 p.m. - 2:15 p.m. | Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits (Contributed talk) | SlidesLive Video | Yulian Wu
Sat 2:15 p.m. - 2:30 p.m. | A Game-Theoretic Perspective on Trust in Recommendation (Contributed talk) | SlidesLive Video | Sarah Cen
- | Combining Counterfactuals With Shapley Values To Explain Image Models (Poster) | Aditya Lahiri · Kamran Alipour · Ehsan Adeli · Babak Salimi
- | Perspectives on Incorporating Expert Feedback into Model Updates (Poster) | Valerie Chen · Umang Bhatt · Hoda Heidari · Adrian Weller · Ameet Talwalkar
- | Individually Fair Learning with One-Sided Feedback (Poster) | Yahav Bechavod · Aaron Roth
- | Robust Reinforcement Learning with Distributional Risk-averse formulation (Poster) | Pierre Clavier · Stephanie Allassonniere · Erwann LE PENNEC
- | Optimal Dynamic Regret in LQR Control (Poster) | Dheeraj Baby · Yu-Xiang Wang
- | Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits (Poster) | Yulian Wu · Youming Tao · Peng Zhao · Di Wang
- | RISE: Robust Individualized Decision Learning with Sensitive Variables (Poster) | Xiaoqing (Ellen) Tan · Zhengling Qi · Christopher Seymour · Lu Tang
- | Adversarial Cheap Talk (Poster) | Christopher Lu · Timon Willi · Alistair Letcher · Jakob Foerster
- | Acting Optimistically in Choosing Safe Actions (Poster) | Tianrui Chen · Aditya Gangrade · Venkatesh Saligrama
- | Dynamic Positive Reinforcement For Long-Term Fairness (Poster) | Bhagyashree Puranik · Upamanyu Madhow · Ramtin Pedarsani
- | An Investigation into the Open World Survival Game Crafter (Poster) | Aleksandar Stanic · Yujin Tang · David Ha · Jürgen Schmidhuber
- | Equity and Equality in Fair Federated Learning (Poster) | Hamid Mozaffari · Amir Houmansadr
- | Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication (Poster) | Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
- | Prisoners of Their Own Devices: How Models Induce Data Bias in Performative Prediction (Poster) | José Maria Pombal · Pedro Saleiro · Mario Figueiredo · Pedro Bizarro
- | A Decision Metric for the Use of a Deep Reinforcement Learning Policy (Poster) | Christina Selby · Edward Staley
- | Safe and Robust Experience Sharing for Deterministic Policy Gradient Algorithms (Poster) | Baturay Sağlam · Dogan Can Cicek · Furkan Burak Mutlu · Suleyman Kozat
- | Planning to Fairly Allocate: Probabilistic Fairness in the Restless Bandit Setting (Poster) | Christine Herlihy · Aviva Prins · Aravind Srinivasan · John P Dickerson
- | Exposing Algorithmic Bias through Inverse Design (Poster) | Carmen Mazijn · Carina Prunkl · Andres Algaba · Jan Danckaert · Vincent Ginis
- | Reward Reports for Reinforcement Learning (Poster) | Thomas Krendl Gilbert · Sarah Dean · Nathan Lambert · Tom Zick · Aaron Snoswell
- | Rashomon Capacity: Measuring Predictive Multiplicity in Probabilistic Classification (Poster) | Hsiang Hsu · Flavio Calmon
- | Counterfactual Metrics for Auditing Black-Box Recommender Systems for Ethical Concerns (Poster) | Nil-Jana Akpinar · Liu Leqi · Dylan Hadfield-Menell · Zachary Lipton
- | Adaptive Data Debiasing Through Bounded Exploration (Poster) | Yifan Yang · Yang Liu · Parinaz Naghizadeh
- | Fairness Over Utilities Via Multi-Objective Rewards (Poster) | Jack Blandin · Ian Kash
- | Defining and Characterizing Reward Gaming (Poster) | Joar Skalse · Nikolaus Howe · Dmitrii Krasheninnikov · David Krueger
- | End-to-end Auditing of Decision Pipelines (Poster) | Benjamin Laufer · Emma Pierson · Nikhil Garg
- | Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning (Poster) | Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- | Engineering a Safer Recommender System (Poster) | Liu Leqi · Sarah Dean
- | RiskyZoo: A Library for Risk-Sensitive Supervised Learning (Poster) | William Wong · Audrey Huang · Liu Leqi · Kamyar Azizzadenesheli · Zachary Lipton
- | Open Problems in (Un)fairness of the Retail Food Safety Inspection Process (Poster) | Tanya Berger-Wolf · Allison Howell · Chris Kanich · Ian Kash · Barbara Kowalcyk · Gina Nicholson Kramer · Andrew Perrault · Shubham Singh
- | From Soft Trees to Hard Trees: Gains and Losses (Poster) | Xin Zeng · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- | Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry (Poster) | Mark Penrod · Harrison Termotto · Varshini Reddy · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- | Long Term Fairness for Minority Groups via Performative Distributionally Robust Optimization (Poster) | Liam Peet-Pare · Alona Fyshe · Nidhi Hegde
- | A Game-Theoretic Perspective on Trust in Recommendation (Poster) | Sarah Cen · Andrew Ilyas · Aleksander Madry
- | Optimizing Personalized Assortment Decisions in the Presence of Platform Disengagement (Poster) | Mika Sumida · Angela Zhou
- | Machine Learning Explainability & Fairness: Insights from Consumer Lending (Poster) | Sormeh Yazdi · Laura Blattner · Duncan McElfresh · P-R Stark · Jann Spiess · Georgy Kalashnov
- | Policy Fairness in Sequential Allocations under Bias Dynamics (Poster) | Meirav Segal · Anne-Marie George · Christos Dimitrakakis
- | A law of adversarial risk, interpolation, and label noise (Poster) | Daniel Paleka · Amartya Sanyal
- | LPI: Learned Positional Invariances for Transfer of Task Structure and Zero-shot Planning (Poster) | Tamas Madarasz
- | The Backfire Effects of Fairness Constraints (Poster) | Yi Sun · Alfredo Cuesta Infante · Kalyan Veeramachaneni
- | Beyond Adult and COMPAS: Fairness in Multi-Class Prediction (Poster) | Wael Alghamdi · Hsiang Hsu · Haewon Jeong · Hao Wang · Peter Winston Michalak · Shahab Asoodeh · Flavio Calmon