Workshop
Responsible Decision Making in Dynamic Environments
Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier

Sat Jul 23 05:00 AM -- 05:00 AM (PDT) @ Hall G
Event URL: https://responsibledecisionmaking.github.io/

Algorithmic decision-making systems are increasingly used in sensitive applications such as advertising, resume reviewing, employment, credit lending, policing, criminal justice, and beyond. The long-term promise of these approaches is to automate, augment, or eventually improve on human decisions, which can be biased or unfair, by leveraging the potential of machine learning to make decisions supported by historical data. Unfortunately, a growing body of evidence shows that current machine learning technology is vulnerable to privacy and security attacks, lacks interpretability, and reproduces (or even exacerbates) historical biases and discriminatory behaviors against certain social groups.

Most of the literature on building socially responsible algorithmic decision-making systems focuses on a static scenario where algorithmic decisions do not change the data distribution. However, real-world applications involve nonstationarities and feedback loops that must be taken into account to measure and ensure fairness in the long term. These feedback loops may involve the learning process itself, which can be biased because of insufficient exploration, or changes in the environment's dynamics due to strategic responses of the various stakeholders. From a machine learning perspective, these sequential processes are primarily studied through counterfactual analysis and reinforcement learning.
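To make the feedback-loop mechanism concrete, here is a minimal, hypothetical sketch (not part of the workshop materials): a lender only observes repayment for approved applicants, so a greedy policy trained on its own observed data can lock in a biased initial estimate forever, while even a small amount of exploration lets the estimate recover. All names and numbers below are illustrative assumptions.

```python
import random

def simulate(explore_prob, rounds=5000, seed=0):
    """Toy selective-labels feedback loop.

    Groups "A" and "B" have the same true repayment rate, but the
    learner starts with a pessimistic estimate for "B". Outcomes are
    observed only for approved applicants, so without exploration the
    estimate for "B" never updates and the initial bias persists.
    """
    rng = random.Random(seed)
    true_rate = {"A": 0.7, "B": 0.7}   # identical ground truth
    est = {"A": 0.7, "B": 0.2}         # biased initial estimates
    counts = {"A": 1, "B": 1}          # observations per group
    approvals = {"A": 0, "B": 0}
    for _ in range(rounds):
        group = rng.choice(["A", "B"])
        # Greedy rule with epsilon-exploration: approve if the
        # estimated repayment rate exceeds 0.5, or explore at random.
        approve = est[group] > 0.5 or rng.random() < explore_prob
        if approve:
            approvals[group] += 1
            reward = 1.0 if rng.random() < true_rate[group] else 0.0
            counts[group] += 1
            # Incremental mean update on observed outcomes only.
            est[group] += (reward - est[group]) / counts[group]
    return est, approvals
```

With `explore_prob=0.0` the policy never approves group "B", so its estimate stays at the biased prior; with modest exploration (e.g. `explore_prob=0.3`) the estimate for "B" converges toward the true rate and approvals resume, illustrating why insufficient exploration is a source of long-term unfairness.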

The purpose of this workshop is to bring together researchers from both industry and academia working on the full spectrum of responsible decision-making in dynamic environments, from theory to practice. In particular, we encourage submissions on the following topics: fairness, privacy and security, robustness, conservative and safe algorithms, explainability and interpretability.

Author Information

Virginie Do (Université Paris Dauphine-PSL, Facebook AI Research)
Thorsten Joachims (Cornell University)
Alessandro Lazaric (Facebook AI Research)
Joelle Pineau (Facebook)
Matteo Pirotta (Facebook AI Research)
Harsh Satija (McGill University)
Nicolas Usunier (Facebook AI Research)
