Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods---Rényi differential privacy and Fisher information leakage---both offer strong semantic protection against data reconstruction attacks.
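As background for the Rényi DP accounting mentioned above, the sketch below shows the standard RDP analysis of the Gaussian mechanism and the textbook conversion back to (ε, δ)-DP. This is a minimal illustration of generic RDP accounting, not the paper's reconstruction bound; the noise multiplier and order range are arbitrary choices for the example.

```python
import math

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """Renyi DP of order alpha for the Gaussian mechanism with
    L2 sensitivity 1 and noise multiplier sigma: eps(alpha) = alpha / (2 * sigma^2)."""
    return alpha / (2 * sigma ** 2)

def rdp_to_dp(alpha: float, rdp_eps: float, delta: float) -> float:
    """Standard conversion from (alpha, eps)-RDP to (eps', delta)-DP:
    eps' = eps + log(1/delta) / (alpha - 1)."""
    return rdp_eps + math.log(1 / delta) / (alpha - 1)

# Sweep over RDP orders and report the tightest (eps, delta) guarantee.
sigma, delta = 2.0, 1e-5
eps = min(rdp_to_dp(a, gaussian_rdp(a, sigma), delta) for a in range(2, 256))
print(f"({eps:.3f}, {delta})-DP")
```

The sweep over orders α is the usual trick: each order yields a valid (ε, δ) pair, so one reports the minimum over the range considered.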
Author Information
Chuan Guo (Meta AI)
Brian Karrer (Meta)
Kamalika Chaudhuri (UCSD and Facebook AI Research)
Laurens van der Maaten (Facebook AI Research)
Laurens van der Maaten is a Research Director at Meta AI Research in New York. Before that, he worked as an Assistant Professor at Delft University of Technology (The Netherlands) and as a post-doctoral researcher at the University of California, San Diego. He received his PhD from Tilburg University (The Netherlands) in 2009. His work received Best Paper Awards at CVPR 2017 and UAI 2021. He is an editorial board member of IEEE Transactions on Pattern Analysis and Machine Intelligence and regularly serves as an area chair for the NeurIPS, ICML, and CVPR conferences. Laurens is interested in a variety of topics in machine learning and computer vision.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Bounding Training Data Reconstruction in Private (Deep) Learning »
  Tue. Jul 19 through Wed. Jul 20, Hall E #918
More from the Same Authors
- 2021: Understanding Instance-based Interpretability of Variational Auto-Encoders »
  Zhifeng Kong · Kamalika Chaudhuri
- 2021: Privacy Amplification by Bernoulli Sampling »
  Jacob Imola · Kamalika Chaudhuri
- 2021: A Shuffling Framework For Local Differential Privacy »
  Casey M Meehan · Amrita Roy Chowdhury · Kamalika Chaudhuri · Somesh Jha
- 2021: Privacy Amplification by Subsampling in Time Domain »
  Tatsuki Koga · Casey M Meehan · Kamalika Chaudhuri
- 2022: Understanding Rare Spurious Correlations in Neural Networks »
  Yao-Yuan Yang · Chi-Ning Chou · Kamalika Chaudhuri
- 2023 Poster: Data-Copying in Generative Models: A Formal Framework »
  Robi Bhattacharjee · Sanjoy Dasgupta · Kamalika Chaudhuri
- 2023 Poster: Why does Throwing Away Data Improve Worst-Group Error? »
  Kamalika Chaudhuri · Kartik Ahuja · Martin Arjovsky · David Lopez-Paz
- 2023 Poster: Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design »
  Chuan Guo · Kamalika Chaudhuri · Pierre Stock · Michael Rabbat
- 2023 Poster: A Two-Stage Active Learning Algorithm for k-Nearest Neighbors »
  Nicholas Rittler · Kamalika Chaudhuri
- 2023 Poster: Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning Using Independent Component Analysis »
  Sanjay Kariyappa · Chuan Guo · Kiwan Maeng · Wenjie Xiong · G. Edward Suh · Moinuddin Qureshi · Hsien-Hsin Sean Lee
- 2023 Poster: Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano »
  Chuan Guo · Alexandre Sablayrolles · Maziar Sanjabi
- 2023 Oral: Why does Throwing Away Data Improve Worst-Group Error? »
  Kamalika Chaudhuri · Kartik Ahuja · Martin Arjovsky · David Lopez-Paz
- 2022 Poster: Thompson Sampling for Robust Transfer in Multi-Task Bandits »
  Zhi Wang · Chicheng Zhang · Kamalika Chaudhuri
- 2022 Spotlight: Thompson Sampling for Robust Transfer in Multi-Task Bandits »
  Zhi Wang · Chicheng Zhang · Kamalika Chaudhuri
- 2022: Q&A and Discussion »
  Chuan Guo · Reza Shokri
- 2022: Conclusion and Future Outlook »
  Chuan Guo · Reza Shokri
- 2022: Privacy and Data Reconstruction »
  Chuan Guo
- 2022 Tutorial: Quantitative Reasoning About Data Privacy in Machine Learning »
  Chuan Guo · Reza Shokri
- 2022: Opening Remarks »
  Chuan Guo · Reza Shokri
- 2021: Discussion Panel #2 »
  Bo Li · Nicholas Carlini · Andrzej Banburski · Kamalika Chaudhuri · Will Xiao · Cihang Xie
- 2021: Invited Talk #9 »
  Kamalika Chaudhuri
- 2021: Invited Talk: Kamalika Chaudhuri »
  Kamalika Chaudhuri
- 2021: Invited Talk: Kamalika Chaudhuri »
  Kamalika Chaudhuri
- 2021: Live Panel Discussion »
  Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit
- 2021 Poster: Sample Complexity of Robust Linear Classification on Separated Data »
  Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri
- 2021 Poster: Making Paper Reviewing Robust to Bid Manipulation Attacks »
  Ruihan Wu · Chuan Guo · Felix Wu · Rahul Kidambi · Laurens van der Maaten · Kilian Weinberger
- 2021 Spotlight: Making Paper Reviewing Robust to Bid Manipulation Attacks »
  Ruihan Wu · Chuan Guo · Felix Wu · Rahul Kidambi · Laurens van der Maaten · Kilian Weinberger
- 2021 Spotlight: Sample Complexity of Robust Linear Classification on Separated Data »
  Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri
- 2021 Poster: Connecting Interpretability and Robustness in Decision Trees through Separation »
  Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri
- 2021 Spotlight: Connecting Interpretability and Robustness in Decision Trees through Separation »
  Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri
- 2020 Poster: Certified Data Removal from Machine Learning Models »
  Chuan Guo · Tom Goldstein · Awni Hannun · Laurens van der Maaten
- 2020 Poster: When are Non-Parametric Methods Robust? »
  Robi Bhattacharjee · Kamalika Chaudhuri
- 2019 Poster: Simple Black-box Adversarial Attacks »
  Chuan Guo · Jacob Gardner · Yurong You · Andrew Wilson · Kilian Weinberger
- 2019 Oral: Simple Black-box Adversarial Attacks »
  Chuan Guo · Jacob Gardner · Yurong You · Andrew Wilson · Kilian Weinberger
- 2019 Talk: Opening Remarks »
  Kamalika Chaudhuri · Ruslan Salakhutdinov
- 2018 Poster: Active Learning with Logged Data »
  Songbai Yan · Kamalika Chaudhuri · Tara Javidi
- 2018 Poster: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
  Yizhen Wang · Somesh Jha · Kamalika Chaudhuri
- 2018 Oral: Active Learning with Logged Data »
  Songbai Yan · Kamalika Chaudhuri · Tara Javidi
- 2018 Oral: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
  Yizhen Wang · Somesh Jha · Kamalika Chaudhuri
- 2017 Workshop: Picky Learners: Choosing Alternative Ways to Process Data »
  Corinna Cortes · Kamalika Chaudhuri · Giulia DeSalvo · Ningshan Zhang · Chicheng Zhang
- 2017 Poster: On Calibration of Modern Neural Networks »
  Chuan Guo · Geoff Pleiss · Yu Sun · Kilian Weinberger
- 2017 Talk: On Calibration of Modern Neural Networks »
  Chuan Guo · Geoff Pleiss · Yu Sun · Kilian Weinberger
- 2017 Poster: Active Heteroscedastic Regression »
  Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan
- 2017 Talk: Active Heteroscedastic Regression »
  Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan