Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
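To make the notion of "delayed impact" in the abstract concrete, here is a rough, self-contained Python sketch of a one-step feedback loop for a single group. The score distribution, the logistic repayment probability, and the gain/loss score updates are all hypothetical numbers chosen for illustration; they are not taken from the paper, and the sketch does not reproduce the paper's exact model or fairness criteria. It only shows how the selection rate of a lending policy can leave a group's mean score improved, stagnant, or diminished after one round of decisions.

```python
# Illustrative one-step feedback sketch (hypothetical parameters throughout).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group: credit scores and probability of repayment given score.
scores = rng.normal(loc=600, scale=50, size=10_000)
p_repay = 1 / (1 + np.exp(-(scores - 580) / 40))  # assumed logistic link

def delayed_impact(selection_rate, gain=20.0, loss=60.0):
    """Mean score change for the group after one lending round.

    Approved individuals' scores rise by `gain` if they repay and fall by
    `loss` if they default (both magnitudes are hypothetical).
    """
    threshold = np.quantile(scores, 1 - selection_rate)
    approved = scores >= threshold
    repaid = rng.random(approved.sum()) < p_repay[approved]
    change = np.where(repaid, gain, -loss)
    return change.sum() / scores.size  # averaged over the whole group

# Sweeping the selection rate exhibits improvement, stagnation, or decline.
for rate in (0.1, 0.3, 0.6, 0.9):
    print(f"selection rate {rate:.1f}: mean score change {delayed_impact(rate):+.2f}")
```

In this toy setup, a fairness constraint that forces a higher selection rate for a group can push that group into the declining regime, which is the qualitative phenomenon the paper characterizes formally.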
Author Information
Lydia T. Liu (University of California, Berkeley)
Sarah Dean (UC Berkeley)
Esther Rolf (UC Berkeley)
Max Simchowitz (UC Berkeley)
Moritz Hardt (University of California, Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Delayed Impact of Fair Machine Learning
  Wed. Jul 11th, 03:00 -- 03:20 PM, Room A6
More from the Same Authors
- 2021: Causal Inference Struggles with Agency on Online Platforms
  Smitha Milli · Luca Belli · Moritz Hardt
- 2021 Poster: Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability
  Mihaela Curmei · Sarah Dean · Benjamin Recht
- 2021 Spotlight: Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability
  Mihaela Curmei · Sarah Dean · Benjamin Recht
- 2021 Poster: Task-Optimal Exploration in Linear Dynamical Systems
  Andrew Wagenmaker · Max Simchowitz · Kevin Jamieson
- 2021 Poster: Alternative Microfoundations for Strategic Classification
  Meena Jagadeesan · Celestine Mendler-Dünner · Moritz Hardt
- 2021 Oral: Task-Optimal Exploration in Linear Dynamical Systems
  Andrew Wagenmaker · Max Simchowitz · Kevin Jamieson
- 2021 Spotlight: Alternative Microfoundations for Strategic Classification
  Meena Jagadeesan · Celestine Mendler-Dünner · Moritz Hardt
- 2020 Workshop: Theoretical Foundations of Reinforcement Learning
  Emma Brunskill · Thodoris Lykouris · Max Simchowitz · Wen Sun · Mengdi Wang
- 2020 Poster: Naive Exploration is Optimal for Online LQR
  Max Simchowitz · Dylan Foster
- 2020 Poster: Performative Prediction
  Juan Perdomo · Tijana Zrnic · Celestine Mendler-Dünner · Moritz Hardt
- 2020 Poster: Reward-Free Exploration for Reinforcement Learning
  Chi Jin · Akshay Krishnamurthy · Max Simchowitz · Tiancheng Yu
- 2020 Poster: Strategic Classification is Causal Modeling in Disguise
  John Miller · Smitha Milli · Moritz Hardt
- 2020 Poster: Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
  Yu Sun · Xiaolong Wang · Zhuang Liu · John Miller · Alexei Efros · Moritz Hardt
- 2020 Poster: Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning
  Esther Rolf · Max Simchowitz · Sarah Dean · Lydia T. Liu · Daniel Bjorkegren · Moritz Hardt · Joshua Blumenstock
- 2020 Poster: Logarithmic Regret for Adversarial Online Control
  Dylan Foster · Max Simchowitz
- 2019 Poster: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt
- 2019 Poster: The Implicit Fairness Criterion of Unconstrained Learning
  Lydia T. Liu · Max Simchowitz · Moritz Hardt
- 2019 Oral: The Implicit Fairness Criterion of Unconstrained Learning
  Lydia T. Liu · Max Simchowitz · Moritz Hardt
- 2019 Oral: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt