Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically via the DP-SGD algorithm, has a disparate impact on different sub-groups in the population: it causes a significantly larger drop in model utility for under-represented sub-populations (minorities) than for well-represented ones. In this work, we compare PATE, another mechanism for training deep learning models with differential privacy, against DP-SGD in terms of fairness. We show that PATE also has a disparate impact, but it is much less severe than that of DP-SGD. From this observation we draw insights into promising directions for achieving better fairness-privacy trade-offs.
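To make the two mechanisms being compared concrete, the following is a minimal NumPy sketch (not the authors' code) of their core private steps: DP-SGD clips each per-example gradient and adds Gaussian noise to the sum, while PATE aggregates teacher votes under Laplace noise. Function names, default hyperparameters, and the simplified noise calibration are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_update(params, per_example_grads, clip_norm=1.0,
                  noise_multiplier=1.1, lr=0.1, rng=None):
    """One illustrative DP-SGD step: clip each per-example gradient to
    L2 norm `clip_norm`, sum, add Gaussian noise with std
    `noise_multiplier * clip_norm`, then average and take a plain
    gradient step. Hyperparameter defaults are arbitrary."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds clip_norm.
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / n

def pate_noisy_argmax(teacher_votes, num_classes, epsilon=1.0, rng=None):
    """Illustrative PATE noisy-max aggregation: count teacher votes per
    class, add Laplace noise of scale 1/epsilon, return the arg-max."""
    if rng is None:
        rng = np.random.default_rng(0)
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy))
```

The clipping step in DP-SGD is one plausible source of the disparate impact the abstract describes: examples from under-represented groups tend to produce larger gradients, which are clipped more aggressively, whereas PATE's noise perturbs only an aggregate vote.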
Author Information
Archit Uniyal (Panjab University, Chandigarh, India)
Machine learning and deep learning enthusiast exploring different fields of ML and DL. I am a researcher trying to find better algorithms to solve our day-to-day problems.
Rakshit Naidu (Carnegie Mellon University)
Sasikanth Kotti (IIT Jodhpur)
Patrik Joslin Kenfack (Innopolis University)
Sahib Singh (Ford)
I hold a Master's Degree in Analytics & Data Science from Carnegie Mellon University and am currently working as a Research Engineer (ML/AI) at Ford Research, USA. I previously worked as a software engineer at Google NYC (Google Cloud team). I currently have one paper under review at this year's ICML workshop. I also hold a current position as a Research Scientist at OpenMined, an open-source community led by Andrew Trask, focusing on the intersection of ML & privacy.
FatemehSadat Mireshghallah (University of California San Diego)
More from the Same Authors
- 2021 : Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples »
  Nelson Manohar-Alers · Ryan Feng · Sahib Singh · Jiguo Song · Atul Prakash
- 2021 : Adversarial Stacked Auto-Encoders for Fair Representation Learning »
  Patrik Joslin Kenfack · Adil Khan · Rasheed Hussain
- 2021 : Benchmarking Differential Privacy and Federated Learning for BERT Models »
  Priyam Basu · Rakshit Naidu · Zumrut Muftuoglu · Sahib Singh · FatemehSadat Mireshghallah
- 2021 : Towards Quantifying the Carbon Emissions of Differentially Private Machine Learning »
  Rakshit Naidu · Harshita Diddee · Ajinkya Mulay · Vardhan Aleti · Krithika Ramesh · Ahmed Zamzam
- 2022 : Memorization in NLP Fine-tuning Methods »
  FatemehSadat Mireshghallah · Archit Uniyal · Tianhao Wang · David Evans · Taylor Berg-Kirkpatrick
- 2023 : Talk »
  FatemehSadat Mireshghallah
- 2023 Workshop: Generative AI and Law (GenLaw) »
  Katherine Lee · A. Feder Cooper · FatemehSadat Mireshghallah · Madiha Zahrah · James Grimmelmann · David Mimno · Deep Ganguli · Ludwig Schubert
- 2020 Poster: Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks »
  Ahmed T. Elthakeb · Prannoy Pilligundla · FatemehSadat Mireshghallah · Alexander Cloninger · Hadi Esmaeilzadeh