Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
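The one-step feedback model described above can be made concrete with a small simulation. The sketch below is a hypothetical illustration, not the paper's actual model or data: the group score distributions, the repayment curve, the thresholds, and the score changes on repayment (+30) and default (-60) are all assumed parameters chosen for demonstration, and the function `simulate` is an invented helper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(threshold_a, threshold_b, n=100_000):
    """Return the one-step change in mean score for two groups.

    A lender approves applicants whose score meets a group-specific
    threshold; approved applicants' scores rise on repayment and fall
    on default, so a group's mean score can improve or decline.
    """
    # Assumed score distributions (loosely credit-score-like).
    scores_a = rng.normal(650, 60, n)
    scores_b = rng.normal(580, 60, n)
    deltas = []
    for scores, thr in [(scores_a, threshold_a), (scores_b, threshold_b)]:
        selected = scores >= thr
        # Assumed repayment probability, increasing in score.
        p_repay = 1 / (1 + np.exp(-(scores - 600) / 50))
        repaid = rng.random(n) < p_repay
        new = scores.copy()
        new[selected & repaid] += 30    # assumed score gain on repayment
        new[selected & ~repaid] -= 60   # assumed score drop on default
        deltas.append(new.mean() - scores.mean())
    return deltas

# Same threshold for both groups vs. a more lenient threshold for the
# lower-scoring group (e.g. to equalize selection rates).
print(simulate(threshold_a=620, threshold_b=620))
print(simulate(threshold_a=620, threshold_b=560))
```

Under these assumed parameters, lowering the threshold for the disadvantaged group approves more applicants whose repayment probability is below one half, so the group's mean score declines, illustrating how a selection-rate constraint can cause harm that an unconstrained policy would avoid.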
Author Information
Lydia T. Liu (University of California, Berkeley)
Sarah Dean (UC Berkeley)
Esther Rolf (UC Berkeley)
Max Simchowitz (UC Berkeley)
Moritz Hardt (University of California, Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Delayed Impact of Fair Machine Learning
  Wed Jul 11th, 03:00 -- 03:20 PM, Room A6
More from the Same Authors
- 2020 Workshop: Theoretical Foundations of Reinforcement Learning
  Emma Brunskill · Thodoris Lykouris · Max Simchowitz · Wen Sun · Mengdi Wang
- 2020 Poster: Naive Exploration is Optimal for Online LQR
  Max Simchowitz · Dylan Foster
- 2020 Poster: Performative Prediction
  Juan Perdomo · Tijana Zrnic · Celestine Mendler-Dünner · Moritz Hardt
- 2020 Poster: Reward-Free Exploration for Reinforcement Learning
  Chi Jin · Akshay Krishnamurthy · Max Simchowitz · Tiancheng Yu
- 2020 Poster: Strategic Classification is Causal Modeling in Disguise
  John Miller · Smitha Milli · Moritz Hardt
- 2020 Poster: Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
  Yu Sun · Xiaolong Wang · Zhuang Liu · John Miller · Alexei Efros · Moritz Hardt
- 2020 Poster: Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning
  Esther Rolf · Max Simchowitz · Sarah Dean · Lydia T. Liu · Daniel Bjorkegren · Moritz Hardt · Joshua Blumenstock
- 2020 Poster: Logarithmic Regret for Adversarial Online Control
  Dylan Foster · Max Simchowitz
- 2019 Poster: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt
- 2019 Poster: The Implicit Fairness Criterion of Unconstrained Learning
  Lydia T. Liu · Max Simchowitz · Moritz Hardt
- 2019 Oral: The Implicit Fairness Criterion of Unconstrained Learning
  Lydia T. Liu · Max Simchowitz · Moritz Hardt
- 2019 Oral: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt