We clarify which fairness guarantees we can and cannot expect to follow from unconstrained machine learning. Specifically, we show that in many settings, unconstrained learning on its own implies group calibration, that is, the outcome variable is conditionally independent of group membership given the score. A lower bound confirms the optimality of our upper bound. Moreover, we prove that the smaller the excess risk of the learned score, the more strongly it violates separation and independence, two other standard fairness criteria. Our results challenge the view that group calibration necessitates an active intervention, suggesting that we often ought to think of it as a byproduct of unconstrained machine learning.
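The group calibration criterion described above — the outcome being conditionally independent of group membership given the score — can be checked empirically by comparing per-group outcome rates within score bins. The following is a minimal illustrative sketch, not code from the paper; the function name, binning scheme, and synthetic data are all assumptions for the example.

```python
import numpy as np

def calibration_by_group(scores, groups, outcomes, n_bins=10):
    """Estimate the empirical outcome rate P(Y = 1 | score bin, group).

    Group calibration holds (approximately) when, within each score bin,
    the outcome rate is the same across groups. Illustrative only.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    rates = {}
    for g in np.unique(groups):
        in_g = groups == g
        rates[g] = [
            outcomes[in_g & (bin_ids == b)].mean()
            if np.any(in_g & (bin_ids == b)) else float("nan")
            for b in range(n_bins)
        ]
    return rates

# Synthetic data: a well-calibrated score satisfies P(Y = 1 | s) ≈ s,
# so both groups should show similar outcome rates in every bin.
rng = np.random.default_rng(0)
n = 10_000
scores = rng.uniform(size=n)
groups = rng.integers(0, 2, size=n)
outcomes = (rng.uniform(size=n) < scores).astype(int)
rates = calibration_by_group(scores, groups, outcomes)
```

On this synthetic data the per-bin rates of the two groups track each other closely, which is the signature of group calibration; on a real learned score, large per-bin gaps between groups would indicate a calibration violation.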
Author Information
Lydia T. Liu (University of California, Berkeley)
Max Simchowitz (University of California, Berkeley)
Moritz Hardt (University of California, Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: The Implicit Fairness Criterion of Unconstrained Learning
  Wed. Jun 12th, 01:30 -- 04:00 AM, Room Pacific Ballroom #205
More from the Same Authors
- 2021: Causal Inference Struggles with Agency on Online Platforms
  Smitha Milli · Luca Belli · Moritz Hardt
- 2021 Poster: Task-Optimal Exploration in Linear Dynamical Systems
  Andrew Wagenmaker · Max Simchowitz · Kevin Jamieson
- 2021 Poster: Alternative Microfoundations for Strategic Classification
  Meena Jagadeesan · Celestine Mendler-Dünner · Moritz Hardt
- 2021 Oral: Task-Optimal Exploration in Linear Dynamical Systems
  Andrew Wagenmaker · Max Simchowitz · Kevin Jamieson
- 2021 Spotlight: Alternative Microfoundations for Strategic Classification
  Meena Jagadeesan · Celestine Mendler-Dünner · Moritz Hardt
- 2020 Workshop: Theoretical Foundations of Reinforcement Learning
  Emma Brunskill · Thodoris Lykouris · Max Simchowitz · Wen Sun · Mengdi Wang
- 2020 Poster: Naive Exploration is Optimal for Online LQR
  Max Simchowitz · Dylan Foster
- 2020 Poster: Performative Prediction
  Juan Perdomo · Tijana Zrnic · Celestine Mendler-Dünner · Moritz Hardt
- 2020 Poster: Reward-Free Exploration for Reinforcement Learning
  Chi Jin · Akshay Krishnamurthy · Max Simchowitz · Tiancheng Yu
- 2020 Poster: Strategic Classification is Causal Modeling in Disguise
  John Miller · Smitha Milli · Moritz Hardt
- 2020 Poster: Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
  Yu Sun · Xiaolong Wang · Zhuang Liu · John Miller · Alexei Efros · Moritz Hardt
- 2020 Poster: Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning
  Esther Rolf · Max Simchowitz · Sarah Dean · Lydia T. Liu · Daniel Bjorkegren · Moritz Hardt · Joshua Blumenstock
- 2020 Poster: Logarithmic Regret for Adversarial Online Control
  Dylan Foster · Max Simchowitz
- 2019 Poster: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt
- 2019 Oral: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt
- 2018 Poster: Delayed Impact of Fair Machine Learning
  Lydia T. Liu · Sarah Dean · Esther Rolf · Max Simchowitz · Moritz Hardt
- 2018 Oral: Delayed Impact of Fair Machine Learning
  Lydia T. Liu · Sarah Dean · Esther Rolf · Max Simchowitz · Moritz Hardt