Machine learning algorithms leak a significant amount of information about their training data. A legitimate user with access to a model's predictions or parameters can reconstruct sensitive information about its training data. Given that privacy policies and regulations require privacy auditing of (machine learning) algorithms, we are interested in a generic approach for quantitative reasoning about the privacy risks of various machine learning algorithms. Differentially private machine learning is currently the most widely accepted framework for privacy-preserving machine learning on sensitive data. The framework prescribes a rigorous accounting of information leakage about the training data through the learning algorithm using statistical divergences. However, it is often difficult to interpret this mathematical guarantee in terms of how a randomized algorithm limits what an adversary can infer about one's data. For example, if a model is trained on my private emails containing personal information such as a credit card number, does DP epsilon = 10 prevent my credit card number from being leaked by the model? If I am a patient participating in a personalized cancer treatment prediction study, does DP epsilon = 5 prevent others from identifying my membership (and hence my cancer positivity) in this study? In this tutorial, we present a unified view of recent works that translate privacy bounds into practical inference attacks and provide a rigorous quantitative understanding of DP machine learning. The objective is to clarify the underlying relations among privacy concepts, inference attacks, protection mechanisms, and tools, and to make the whole field more understandable to ML researchers and engineers.
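To make the epsilon questions above concrete, one standard translation goes through the hypothesis-testing view of (epsilon, delta)-DP (Kairouz et al., 2015), which bounds the advantage (true-positive rate minus false-positive rate) of any membership inference adversary by 1 - 2(1 - delta) / (1 + e^epsilon). The sketch below is a minimal illustration of this translation, not a tool from the tutorial itself; the function name and example values are ours.

```python
import math

def membership_advantage_bound(epsilon: float, delta: float = 0.0) -> float:
    """Upper bound on the membership inference advantage (TPR - FPR)
    of *any* adversary against an (epsilon, delta)-DP mechanism.

    Follows from the hypothesis-testing characterization of DP:
    FNR + e^eps * FPR >= 1 - delta and e^eps * FNR + FPR >= 1 - delta,
    with FPR + FNR minimized at FPR = FNR = (1 - delta) / (1 + e^eps).
    """
    return 1.0 - 2.0 * (1.0 - delta) / (1.0 + math.exp(epsilon))

if __name__ == "__main__":
    # The abstract's examples: how protective are eps = 5 or eps = 10?
    for eps in [0.1, 1.0, 5.0, 10.0]:
        adv = membership_advantage_bound(eps, delta=1e-5)
        # Balanced accuracy of the best possible attacker is 1/2 + adv/2.
        print(f"eps = {eps:5.1f}: advantage <= {adv:.4f}, "
              f"attack accuracy <= {0.5 + adv / 2:.4f}")
```

Running this shows the gap the tutorial addresses: at epsilon = 0.1 the attacker's advantage over random guessing is at most about 5%, while at epsilon = 10 the worst-case bound is essentially vacuous (advantage close to 1), even though such models often resist practical attacks; tighter, attack-specific analyses (e.g., for data reconstruction) are the subject of the sessions below.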
Schedule

Mon 6:30 a.m. - 6:35 a.m. | Opening Remarks (Live presentation) | Chuan Guo · Reza Shokri
Mon 6:35 a.m. - 7:20 a.m. | Privacy and Membership Inference (Live presentation) | Reza Shokri
Mon 7:20 a.m. - 7:30 a.m. | Break
Mon 7:30 a.m. - 8:15 a.m. | Privacy and Data Reconstruction (Live presentation) | Chuan Guo
Mon 8:15 a.m. - 8:20 a.m. | Conclusion and Future Outlook (Live presentation) | Chuan Guo · Reza Shokri
Mon 8:20 a.m. - 8:30 a.m. | Q&A and Discussion (Discussion) | Chuan Guo · Reza Shokri
Author Information
Chuan Guo (Meta AI)
Reza Shokri (National University of Singapore)