Recent work on fairness in machine learning has proposed adjusting for fairness by equalizing accuracy metrics across groups and has also studied how datasets affected by historical prejudices may lead to unfair decision policies. We connect these lines of work and study the residual unfairness that arises when a fairness-adjusted predictor is not actually fair on the target population because the training data were systematically censored by existing biased policies. This scenario is particularly common in the very applications where fairness is a concern. We theoretically characterize the impact of such censoring on standard fairness metrics for binary classifiers and provide criteria for when residual unfairness may or may not appear. We prove that, under certain conditions, fairness-adjusted classifiers will in fact induce residual unfairness that perpetuates the same injustices, against the same groups, that biased the data to begin with, thus showing that even state-of-the-art fair machine learning can have a "bias in, bias out" property. When certain benchmark data is available, we show how sample reweighting can estimate and adjust fairness metrics while accounting for censoring. We use this to study the case of Stop, Question, and Frisk (SQF) and demonstrate that attempting to adjust for fairness perpetuates the same injustices that the policy is infamous for.
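The sample-reweighting idea mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the names `p_select` and `weighted_tpr_by_group` are assumptions made here for illustration). Each labeled example is weighted by the inverse of the historical policy's estimated probability of selecting it for labeling, so that a group-wise accuracy metric such as the true positive rate targets the full population rather than the censored training sample.

```python
# Illustrative sketch only: inverse-probability reweighting of a group-wise
# fairness metric under censoring by a historical selection policy.
# Assumes each labeled example carries p_select, an estimate of the historical
# policy's probability of selecting (i.e., labeling) it.
import numpy as np

def weighted_tpr_by_group(y_true, y_pred, group, p_select):
    """True positive rate per group, weighting labeled examples by 1/p_select
    so the estimate targets the full population, not the censored sample."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    group, w = np.asarray(group), 1.0 / np.asarray(p_select)
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)  # positives in group g
        rates[g] = np.sum(w[mask] * y_pred[mask]) / np.sum(w[mask])
    return rates

# Toy usage with synthetic data and known selection probabilities.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
y_pred = rng.integers(0, 2, size=n)
p_select = np.clip(rng.uniform(0.2, 0.9, size=n), 0.05, 1.0)
print(weighted_tpr_by_group(y_true, y_pred, group, p_select))
```

Comparing the reweighted rates across groups gives a censoring-adjusted view of disparities such as unequal true positive rates; the unweighted version of the same computation would reflect only the censored sample.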
Author Information
Nathan Kallus (Cornell University)
Angela Zhou (Cornell University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Residual Unfairness in Fair Machine Learning from Prejudiced Data
  Thu. Jul 12th 09:20 -- 09:30 AM, Room A6
More from the Same Authors
- 2020: Poster #12
  Angela Zhou
- 2022: Optimizing Personalized Assortment Decisions in the Presence of Platform Disengagement
  Mika Sumida · Angela Zhou
- 2023 Poster: B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding
  Jacob Dorn · Marah Ghoummaid · Andrew Jesson · Nathan Kallus · Miruna Oprescu · Uri Shalit
- 2023 Poster: Smooth Non-stationary Bandits
  Su Jia · Qian Xie · Nathan Kallus · Peter I Frazier
- 2023 Poster: Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR
  Kaiwen Wang · Nathan Kallus · Wen Sun
- 2023 Poster: Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings
  Masatoshi Uehara · Ayush Sekhari · Jason Lee · Nathan Kallus · Wen Sun
- 2022 Workshop: Spurious correlations, Invariance, and Stability (SCIS)
  Aahlad Puli · Maggie Makar · Victor Veitch · Yoav Wald · Mark Goldstein · Limor Gultchin · Angela Zhou · Uri Shalit · Suchi Saria
- 2022 Poster: Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning
  Nathan Kallus · Xiaojie Mao · Kaiwen Wang · Zhengyuan Zhou
- 2022 Poster: Learning Bellman Complete Representations for Offline Policy Evaluation
  Jonathan Chang · Kaiwen Wang · Nathan Kallus · Wen Sun
- 2022 Spotlight: Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning
  Nathan Kallus · Xiaojie Mao · Kaiwen Wang · Zhengyuan Zhou
- 2022 Oral: Learning Bellman Complete Representations for Offline Policy Evaluation
  Jonathan Chang · Kaiwen Wang · Nathan Kallus · Wen Sun
- 2021 Poster: Optimal Off-Policy Evaluation from Multiple Logging Policies
  Nathan Kallus · Yuta Saito · Masatoshi Uehara
- 2021 Spotlight: Optimal Off-Policy Evaluation from Multiple Logging Policies
  Nathan Kallus · Yuta Saito · Masatoshi Uehara
- 2020 Workshop: Participatory Approaches to Machine Learning
  Angela Zhou · David Madras · Deborah Raji · Smitha Milli · Bogdan Kulynych · Richard Zemel
- 2020: Opening remarks
  Deborah Raji · Angela Zhou · David Madras · Smitha Milli · Bogdan Kulynych
- 2020 Poster: Statistically Efficient Off-Policy Policy Gradients
  Nathan Kallus · Masatoshi Uehara
- 2020 Poster: DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training
  Nathan Kallus
- 2020 Poster: Efficient Policy Learning from Surrogate-Loss Classification Reductions
  Andrew Bennett · Nathan Kallus
- 2020 Poster: Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation
  Nathan Kallus · Masatoshi Uehara
- 2019 Poster: Classifying Treatment Responders Under Causal Effect Monotonicity
  Nathan Kallus
- 2019 Oral: Classifying Treatment Responders Under Causal Effect Monotonicity
  Nathan Kallus
- 2017 Poster: Recursive Partitioning for Personalization using Observational Data
  Nathan Kallus
- 2017 Talk: Recursive Partitioning for Personalization using Observational Data
  Nathan Kallus