In domains such as medicine, it can be acceptable for machine learning models to include sensitive attributes such as gender and ethnicity. In this work, we argue that when there is this kind of treatment disparity, it should be in the best interest of each group. Drawing on ethical principles such as beneficence ("do the best") and non-maleficence ("do no harm"), we show how to use sensitive attributes to train decoupled classifiers that satisfy preference guarantees. These guarantees ensure that the majority of individuals in each group prefer their assigned classifier to (i) a pooled model that ignores group membership (rationality), and (ii) the model assigned to any other group (envy-freeness). We introduce a recursive procedure that adaptively selects group attributes for decoupling, and present formal conditions, stated in terms of generalization error, under which the preference guarantees hold. We validate the effectiveness of the procedure on real-world datasets, showing that it improves accuracy without violating preference guarantees on test data.
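To make the rationality and envy-freeness conditions concrete, here is a minimal one-level sketch of decoupled training with both checks. Everything beyond the abstract's definitions is an assumption: scikit-learn logistic regression as the base learner, binary labels coded 0/1, and "prefers" operationalized as a classifier assigning at least as high a probability to an individual's true label. The paper's full procedure applies decoupling recursively over candidate group attributes and states the checks in terms of generalization error, so in practice the comparisons below would be run on held-out data.

```python
# Sketch only: one level of decoupling with majority-preference checks,
# under the assumptions named above (not the paper's reference code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def prefers(clf_a, clf_b, X, y):
    """Boolean mask over individuals: True where clf_a assigns at least as
    high a probability to the true label as clf_b does (assumed notion of
    individual preference; labels must be coded 0/1)."""
    idx = np.arange(len(y))
    p_a = clf_a.predict_proba(X)[idx, y]
    p_b = clf_b.predict_proba(X)[idx, y]
    return p_a >= p_b

def decoupled_with_guarantees(X, y, groups):
    pooled = train(X, y)                       # group-blind baseline
    group_ids = np.unique(groups)
    models = {g: train(X[groups == g], y[groups == g]) for g in group_ids}
    assigned = {}
    for g in group_ids:
        Xg, yg = X[groups == g], y[groups == g]
        # Rationality: a majority of group g prefers its own model
        # to the pooled model.
        rational = prefers(models[g], pooled, Xg, yg).mean() >= 0.5
        # Envy-freeness: a majority of group g prefers its own model
        # to every other group's model.
        envy_free = all(prefers(models[g], models[h], Xg, yg).mean() >= 0.5
                        for h in group_ids if h != g)
        # Decouple only when both guarantees hold; otherwise fall back
        # to the pooled model so no group is made worse off.
        assigned[g] = models[g] if (rational and envy_free) else pooled
    return assigned
```

The fallback to the pooled model is one simple way to realize "fairness without harm": a group's classifier deviates from the group-blind baseline only when the deviation is in that group's interest by both criteria.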
Author Information
Berk Ustun (Harvard University)
Yang Liu (UCSC)
David Parkes (Harvard University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Fairness without Harm: Decoupled Classifiers with Preference Guarantees
  Thu. Jun 13th, 06:35–06:40 PM, Seaside Ballroom
More from the Same Authors
- 2020: Contributed Talk: Incentives for Federated Learning: a Hypothesis Elicitation Approach
  Yang Liu · Jiaheng Wei
- 2020: Contributed Talk: Linear Models are Robust Optimal Under Strategic Behavior
  Wei Tang · Chien-Ju Ho · Yang Liu
- 2020: Contributed Talk: From Predictions to Decisions: Using Lookahead Regularization
  Nir Rosenfeld · Sai Srivatsa Ravindranath · David Parkes
- 2021: Linear Classifiers that Encourage Constructive Adaptation
  Yatong Chen · Jialu Wang · Yang Liu
- 2021: When Optimizing f-divergence is Robust with Label Noise
  Jiaheng Wei · Yang Liu
- 2022: Adaptive Data Debiasing Through Bounded Exploration
  Yifan Yang · Yang Liu · Parinaz Naghizadeh
- 2023: To Aggregate or Not? Learning with Separate Noisy Labels
  Jiaheng Wei · Zhaowei Zhu · Tianyi Luo · Ehsan Amid · Abhishek Kumar · Yang Liu
- 2023: Understanding Unfairness via Training Concept Influence
  Yuanshun Yao · Yang Liu
- 2023: Towards an Efficient Algorithm for Time Series Forecasting with Anomalies
  Hao Cheng · Qingsong Wen · Yang Liu · Liang Sun
- 2023 Workshop: DMLR Workshop: Data-centric Machine Learning Research
  Ce Zhang · Praveen Paritosh · Newsha Ardalani · Nezihe Merve Gürel · William Gaviria Rojas · Yang Liu · Rotem Dror · Manil Maskey · Lilith Bat-Leah · Tzu-Sheng Kuo · Luis Oala · Max Bartolo · Ludwig Schmidt · Alicia Parrish · Daniel Kondermann · Najoung Kim
- 2023 Poster: Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning
  Matthias Gerstgrasser · David Parkes
- 2023 Poster: Identifiability of Label Noise Transition Matrix
  Yang Liu · Hao Cheng · Kun Zhang
- 2023 Poster: Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes
  Zhaowei Zhu · Yuanshun Yao · Jiankai Sun · Hang Li · Yang Liu
- 2023 Poster: Model Transferability with Responsive Decision Subjects
  Yatong Chen · Zeyu Tang · Kun Zhang · Yang Liu
- 2022: Model Transferability With Responsive Decision Subjects
  Yang Liu · Yatong Chen · Zeyu Tang · Kun Zhang
- 2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Detecting Corrupted Labels Without Training a Model to Predict
  Zhaowei Zhu · Zihao Dong · Yang Liu
- 2022 Poster: Understanding Instance-Level Impact of Fairness Constraints
  Jialu Wang · Xin Eric Wang · Yang Liu
- 2022 Spotlight: Understanding Instance-Level Impact of Fairness Constraints
  Jialu Wang · Xin Eric Wang · Yang Liu
- 2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Metric-Fair Classifier Derandomization
  Jimmy Wu · Yatong Chen · Yang Liu
- 2022 Poster: Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features
  Zhaowei Zhu · Jialu Wang · Yang Liu
- 2022 Spotlight: Detecting Corrupted Labels Without Training a Model to Predict
  Zhaowei Zhu · Zihao Dong · Yang Liu
- 2022 Spotlight: Metric-Fair Classifier Derandomization
  Jimmy Wu · Yatong Chen · Yang Liu
- 2022 Spotlight: Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features
  Zhaowei Zhu · Jialu Wang · Yang Liu
- 2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2021 Poster: Learning Representations by Humans, for Humans
  Sophie Hilgard · Nir Rosenfeld · Mahzarin Banaji · Jack Cao · David Parkes
- 2021 Spotlight: Learning Representations by Humans, for Humans
  Sophie Hilgard · Nir Rosenfeld · Mahzarin Banaji · Jack Cao · David Parkes
- 2021 Poster: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels
  Zhaowei Zhu · Yiwen Song · Yang Liu
- 2021 Spotlight: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels
  Zhaowei Zhu · Yiwen Song · Yang Liu
- 2021 Poster: Understanding Instance-Level Label Noise: Disparate Impacts and Treatments
  Yang Liu
- 2021 Oral: Understanding Instance-Level Label Noise: Disparate Impacts and Treatments
  Yang Liu
- 2020 Workshop: Incentives in Machine Learning
  Boi Faltings · Yang Liu · David Parkes · Goran Radanovic · Dawn Song
- 2020: Panel 2
  Deborah Raji · Berk Ustun · Alexandra Chouldechova · Jamelle Watson-Daniels
- 2020: Actionable Recourse in Machine Learning
  Berk Ustun
- 2020 Poster: The Intrinsic Robustness of Stochastic Bandits to Strategic Manipulation
  Zhe Feng · David Parkes · Haifeng Xu
- 2020 Poster: Predictive Multiplicity in Classification
  Charles Marx · Flavio Calmon · Berk Ustun
- 2020 Poster: Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
  Yang Liu · Hongyi Guo
- 2019 Poster: Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
  Hao Wang · Berk Ustun · Flavio Calmon
- 2019 Oral: Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
  Hao Wang · Berk Ustun · Flavio Calmon
- 2019 Poster: Learning to Collaborate in Markov Decision Processes
  Goran Radanovic · Rati Devidze · David Parkes · Adish Singla
- 2019 Poster: Optimal Auctions through Deep Learning
  Paul Duetting · Zhe Feng · Harikrishna Narasimhan · David Parkes · Sai Srivatsa Ravindranath
- 2019 Oral: Learning to Collaborate in Markov Decision Processes
  Goran Radanovic · Rati Devidze · David Parkes · Adish Singla
- 2019 Oral: Optimal Auctions through Deep Learning
  Paul Duetting · Zhe Feng · Harikrishna Narasimhan · David Parkes · Sai Srivatsa Ravindranath