Motivated by settings in which predictive models may be required to be non-discriminatory with respect to certain attributes (such as race), but even collecting the sensitive attribute may be forbidden or restricted, we initiate the study of fair learning under the constraint of differential privacy. Our first algorithm is a private implementation of the equalized odds post-processing approach of Hardt et al. (2016). This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of "disparate treatment". Our second algorithm is a differentially private version of the oracle-efficient in-processing approach of Agarwal et al. (2018). This algorithm is more complex but need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time.
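The flavor of the first approach — derive group-dependent randomized post-processing rates from privately estimated error statistics, then apply them using group membership at prediction time — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the two-group restriction, the equalization of true positive rates only, and all function names are simplifying assumptions made here for illustration.

```python
import numpy as np

def dp_equalize_tpr(y_true, y_pred, group, epsilon, rng=None):
    """Hypothetical sketch of DP fairness post-processing (not the paper's method).

    Privately estimate each group's true positive rate (TPR) with the Laplace
    mechanism, then randomly flip positive predictions in the higher-TPR group
    so that both groups' expected TPRs match the lower one.

    Privacy accounting (sketch): each count has sensitivity 1; the two counts
    per group use Laplace noise of scale 2/epsilon (epsilon/2 each), and the
    groups are disjoint, so parallel composition gives epsilon-DP overall.
    """
    rng = np.random.default_rng(rng)
    groups = np.unique(group)
    assert len(groups) == 2, "this sketch handles exactly two groups"

    tpr = {}
    for g in groups:
        pos = (group == g) & (y_true == 1)          # actual positives in group g
        n_pos = pos.sum() + rng.laplace(scale=2 / epsilon)
        n_tp = (pos & (y_pred == 1)).sum() + rng.laplace(scale=2 / epsilon)
        tpr[g] = np.clip(n_tp / max(n_pos, 1.0), 0.0, 1.0)

    lo = min(tpr.values())
    y_out = y_pred.copy()
    for g in groups:
        if tpr[g] > lo:
            # Flipping each positive prediction to negative with probability p
            # scales this group's TPR by (1 - p); choose p to land on `lo`.
            p = 1 - lo / tpr[g]
            flip = (group == g) & (y_pred == 1) & (rng.random(len(y_pred)) < p)
            y_out[flip] = 0
    return y_out
```

Note the "disparate treatment" aspect the abstract mentions: the derived flip probability is group-dependent, so applying this rule at test time requires knowing each individual's protected group membership.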
Author Information
Matthew Jagielski (Northeastern University)
Michael Kearns (University of Pennsylvania)
Jieming Mao (University of Pennsylvania)
Alina Oprea (Northeastern University)
Aaron Roth (University of Pennsylvania)
Saeed Sharifi-Malvajerdi (University of Pennsylvania)
Jonathan Ullman (Northeastern University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Differentially Private Fair Learning
  Fri. Jun 14th, 01:30 -- 04:00 AM, Room: Pacific Ballroom #134
More from the Same Authors
- 2021: Membership Inference Attacks are More Powerful Against Updated Models
  Matthew Jagielski · Stanley Wu · Alina Oprea · Jonathan Ullman · Roxana Geambasu
- 2021: Adaptive Machine Unlearning
  Varun Gupta · Christopher Jung · Seth Neel · Aaron Roth · Saeed Sharifi-Malvajerdi · Chris Waites
- 2021: Shuffle Private Stochastic Convex Optimization
  Albert Cheu · Matthew Joseph · Jieming Mao · Binghui Peng
- 2021: Covariance-Aware Private Mean Estimation Without Private Covariance Estimation
  Gavin Brown · Marco Gaboardi · Adam Smith · Jonathan Ullman · Lydia Zakynthinou
- 2022: Individually Fair Learning with One-Sided Feedback
  Yahav Bechavod · Aaron Roth
- 2023: TMI! Finetuned Models Spill Secrets from Pretraining
  John Abascal · Stanley Wu · Alina Oprea · Jonathan Ullman
- 2023: Replicable Reinforcement Learning
  Eric Eaton · Marcel Hussing · Michael Kearns · Jessica Sorrell
- 2023 Oral: Multicalibration as Boosting for Regression
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2023 Poster: From Robustness to Privacy and Back
  Hilal Asi · Jonathan Ullman · Lydia Zakynthinou
- 2023 Poster: Individually Fair Learning with One-Sided Feedback
  Yahav Bechavod · Aaron Roth
- 2023 Poster: The Statistical Scope of Multicalibration
  Georgy Noarov · Aaron Roth
- 2023 Poster: Multicalibration as Boosting for Regression
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2021: Contributed Talks Session 2
  Saeed Sharifi-Malvajerdi · Audra McMillan · Ryan McKenna
- 2021 Poster: Leveraging Public Data for Practical Private Query Release
  Terrance Liu · Giuseppe Vietri · Thomas Steinke · Jonathan Ullman · Steven Wu
- 2021 Spotlight: Leveraging Public Data for Practical Private Query Release
  Terrance Liu · Giuseppe Vietri · Thomas Steinke · Jonathan Ullman · Steven Wu
- 2021 Poster: Differentially Private Query Release Through Adaptive Projection
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2021 Oral: Differentially Private Query Release Through Adaptive Projection
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2020 Poster: Private Query Release Assisted by Public Data
  Raef Bassily · Albert Cheu · Shay Moran · Aleksandar Nikolov · Jonathan Ullman · Steven Wu
- 2018 Poster: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Oral: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Poster: Mitigating Bias in Adaptive Data Gathering via Differential Privacy
  Seth Neel · Aaron Roth
- 2018 Oral: Mitigating Bias in Adaptive Data Gathering via Differential Privacy
  Seth Neel · Aaron Roth
- 2017 Poster: Meritocratic Fairness for Cross-Population Selection
  Michael Kearns · Aaron Roth · Steven Wu
- 2017 Talk: Meritocratic Fairness for Cross-Population Selection
  Michael Kearns · Aaron Roth · Steven Wu
- 2017 Poster: Fairness in Reinforcement Learning
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth
- 2017 Talk: Fairness in Reinforcement Learning
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth