We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like k-way marginals, subject to differential privacy. Our algorithm makes adaptive use of a continuous relaxation of the Projection Mechanism, which answers queries on the private dataset using simple perturbation, and then attempts to find the synthetic dataset that most closely matches the noisy answers. We use a continuous relaxation of the synthetic dataset domain which makes the projection loss differentiable, and allows us to use efficient ML optimization techniques and tooling. Rather than answering all queries up front, we make judicious use of our privacy budget by iteratively finding queries for which our (relaxed) synthetic data has high error, and then repeating the projection. Randomized rounding allows us to obtain synthetic data in the original schema. We perform experimental evaluations across a range of parameters and datasets, and find that our method outperforms existing algorithms on large query classes.
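A minimal sketch of the perturb-project-round pipeline described in the abstract, on binary data with 2-way marginals. All names and parameter values (noise scale, synthetic dataset size, learning rate) are illustrative choices, not the paper's; the adaptive loop that repeatedly selects high-error queries and re-projects is omitted for brevity, so this shows a single projection round only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy private dataset: n records over d binary features.
n, d = 1000, 6
data = (rng.random((n, d)) < 0.3).astype(float)

# Query class: all 2-way marginals q_ij(D) = mean over records of x_i * x_j.
pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]

def answer(D):
    # Differentiable when D is a continuous relaxation in [0, 1]^{m x d}.
    return np.array([(D[:, i] * D[:, j]).mean() for i, j in pairs])

# Step 1: perturb the true answers (Gaussian noise; sigma is illustrative).
sigma = 0.01
noisy = answer(data) + rng.normal(0.0, sigma, len(pairs))

# Step 2: project -- gradient descent on the squared-error loss over a
# relaxed synthetic dataset S with continuous entries in [0, 1].
m, lr = 200, 5.0
S = rng.random((m, d))
for _ in range(1000):
    err = answer(S) - noisy
    grad = np.zeros_like(S)
    for e, (i, j) in zip(err, pairs):
        # d/dS[r,i] of (mean_r S[r,i]*S[r,j] - a_ij)^2
        grad[:, i] += 2.0 * e * S[:, j] / m
        grad[:, j] += 2.0 * e * S[:, i] / m
    S = np.clip(S - lr * grad, 0.0, 1.0)

# Step 3: randomized rounding back to the original binary schema.
synth = (rng.random(S.shape) < S).astype(float)
max_err = np.max(np.abs(answer(synth) - answer(data)))
```

Only step 1 touches the private data, so the privacy cost is paid once per projection round; steps 2 and 3 are post-processing and consume no additional budget.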
Author Information
Sergul Aydore (Amazon Web Services)
Sergul Aydore is an applied scientist at Amazon Web Services (AWS). Prior to AWS, Sergul was an Assistant Professor at the department of Electrical and Computer Engineering of Stevens Institute of Technology. She received her PhD degree from the Signal and Image Processing Institute at the University of Southern California in 2014. Her PhD work was on developing robust connectivity measures for neuroimaging data. She was the recipient of the Viterbi School of Engineering Doctoral Fellowship and was recognized as a 2014 USC Ming Hsieh Institute Ph.D. Scholar. Sergul has published in top-tier machine learning conferences such as ICML and NeurIPS on advancing generalization in machine learning models. She also served as an area chair in WiML at NeurIPS 2019. Her research at Stevens was supported by AWS ML Research Awards.
William Brown (Columbia University)
Michael Kearns (University of Pennsylvania)
Krishnaram Kenthapadi (Amazon AWS AI)
Luca Melis (Amazon Web Services)
Aaron Roth (University of Pennsylvania)
Ankit Siva (Amazon)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Oral: Differentially Private Query Release Through Adaptive Projection »
  Thu. Jul 22nd 01:00 -- 01:20 PM
More from the Same Authors
- 2021 : Adaptive Machine Unlearning »
  Varun Gupta · Christopher Jung · Seth Neel · Aaron Roth · Saeed Sharifi-Malvajerdi · Chris Waites
- 2022 : Individually Fair Learning with One-Sided Feedback »
  Yahav Bechavod · Aaron Roth
- 2022 : Individually Fair Learning with One-Sided Feedback »
  Yahav Bechavod · Aaron Roth
- 2023 : Replicable Reinforcement Learning »
  ERIC EATON · Marcel Hussing · Michael Kearns · Jessica Sorrell
- 2023 Oral: Multicalibration as Boosting for Regression »
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2023 Poster: Individually Fair Learning with One-Sided Feedback »
  Yahav Bechavod · Aaron Roth
- 2023 Poster: The Statistical Scope of Multicalibration »
  Georgy Noarov · Aaron Roth
- 2023 Poster: Multicalibration as Boosting for Regression »
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2023 Poster: Federated Linear Contextual Bandits with User-level Differential Privacy »
  Ruiquan Huang · Huanyu Zhang · Luca Melis · Milan Shen · Meisam Hejazinia · Jing Yang
- 2023 Tutorial: Responsible AI for Generative AI in Practice: Lessons Learned and Open Challenges »
  Krishnaram Kenthapadi · Hima Lakkaraju · Nazneen Rajani
- 2022 Poster: Generating Distributional Adversarial Examples to Evade Statistical Detectors »
  Yigitcan Kaya · Muhammad Bilal Zafar · Sergul Aydore · Nathalie Rauschmayr · Krishnaram Kenthapadi
- 2022 Spotlight: Generating Distributional Adversarial Examples to Evade Statistical Detectors »
  Yigitcan Kaya · Muhammad Bilal Zafar · Sergul Aydore · Nathalie Rauschmayr · Krishnaram Kenthapadi
- 2021 : Key Takeaways, Conclusion, and Discussion (including Q&A) »
  Krishnaram Kenthapadi
- 2021 : Responsible AI Case Studies at Amazon »
  Krishnaram Kenthapadi
- 2021 : Responsible AI Case Studies at LinkedIn »
  Krishnaram Kenthapadi
- 2021 : Introduction and Brief Overview of Responsible AI »
  Krishnaram Kenthapadi
- 2021 Tutorial: Responsible AI in Industry: Practical Challenges and Lessons Learned »
  Krishnaram Kenthapadi · Ben Packer · Mehrnoosh Sameki · Nashlie Sephus
- 2021 : Opening remarks »
  Krishnaram Kenthapadi
- 2019 Poster: Differentially Private Fair Learning »
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2019 Oral: Differentially Private Fair Learning »
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2019 Poster: Feature Grouping as a Stochastic Regularizer for High-Dimensional Structured Data »
  Sergul Aydore · Thirion Bertrand · Gael Varoquaux
- 2019 Oral: Feature Grouping as a Stochastic Regularizer for High-Dimensional Structured Data »
  Sergul Aydore · Thirion Bertrand · Gael Varoquaux
- 2018 Poster: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness »
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Oral: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness »
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Poster: Mitigating Bias in Adaptive Data Gathering via Differential Privacy »
  Seth Neel · Aaron Roth
- 2018 Oral: Mitigating Bias in Adaptive Data Gathering via Differential Privacy »
  Seth Neel · Aaron Roth
- 2017 Poster: Meritocratic Fairness for Cross-Population Selection »
  Michael Kearns · Aaron Roth · Steven Wu
- 2017 Talk: Meritocratic Fairness for Cross-Population Selection »
  Michael Kearns · Aaron Roth · Steven Wu
- 2017 Poster: Fairness in Reinforcement Learning »
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth
- 2017 Talk: Fairness in Reinforcement Learning »
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth