DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?
Archit Uniyal · Rakshit Naidu · Sasikanth Kotti · Patrik Joslin Kenfack · Sahib Singh · FatemehSadat Mireshghallah

Recent advances in differentially private deep learning have demonstrated that applying differential privacy (specifically, the DP-SGD algorithm) has a disparate impact on different sub-groups in the population: it leads to a significantly larger drop in model utility for under-represented sub-populations (minorities) than for well-represented ones. In this work, we compare PATE, another mechanism for training deep learning models with differential privacy, against DP-SGD in terms of fairness. We show that PATE also has a disparate impact, but that it is much less severe than that of DP-SGD. From this observation we draw insights into promising directions for achieving better fairness-privacy trade-offs.
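To make the notion of "disparate impact" concrete, the sketch below shows one way to quantify it: measure each subgroup's accuracy drop when moving from a non-private model to a DP-trained one. This is a minimal illustration, not the authors' code; the function names, the toy arrays, and the choice of accuracy gap as the metric are all assumptions for demonstration purposes.

```python
# Hypothetical sketch (not the paper's implementation) of measuring
# disparate impact: the per-subgroup accuracy lost when switching from
# a non-private model to a DP-trained one (DP-SGD or PATE).
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup label (e.g., a protected attribute)."""
    return {
        g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }

def utility_drop(nonprivate_accs, private_accs):
    """Per-group accuracy lost by switching to the private model."""
    return {g: nonprivate_accs[g] - private_accs[g] for g in nonprivate_accs}

# Toy example: group 1 (under-represented) loses more accuracy than group 0.
y_true  = np.array([1, 0, 1, 1, 0, 1, 0, 1])
groups  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_base  = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # non-private predictions
y_priv  = np.array([1, 0, 1, 1, 1, 0, 0, 0])   # DP-trained predictions

drops = utility_drop(
    subgroup_accuracy(y_true, y_base, groups),
    subgroup_accuracy(y_true, y_priv, groups),
)
print(drops)  # a larger drop for the minority group signals disparate impact
```

Under this metric, comparing the gap between the majority and minority drops for a DP-SGD model against the same gap for a PATE model is one way to operationalize the paper's comparison.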

Author Information

Archit Uniyal (Panjab University, Chandigarh, India)

A machine learning and deep learning enthusiast exploring different fields of ML and DL. I am a researcher trying to find better algorithms to solve our day-to-day problems.

Rakshit Naidu (Carnegie Mellon University)
Sasikanth Kotti (IIT Jodhpur)
Patrik Joslin Kenfack (Innopolis University)
Sahib Singh (Ford)

I hold a Master's degree in Analytics & Data Science from Carnegie Mellon University and am currently working as a Research Engineer (ML/AI) at Ford Research, USA. I previously worked as a software engineer at Google NYC (Google Cloud team). I currently have one paper under review at this year's ICML workshop. I also hold a position as a Research Scientist at OpenMined, an open-source community led by Andrew Trask that focuses on the intersection of ML and privacy.

FatemehSadat Mireshghallah (University of California San Diego)
