Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners. To this end, FL requires the data owners to perform training locally and share the gradients or weight updates (instead of the private inputs) with the central server, which securely aggregates them over multiple data owners. Although aggregation by itself does not offer provable privacy protection, prior work suggested that if the batch size is sufficiently large, the aggregation may be secure enough. In this paper, we propose the Cocktail Party Attack (CPA) that, contrary to prior belief, is able to recover the private inputs from gradients/weight updates aggregated over as many as 1024 samples. CPA leverages the crucial insight that the aggregate gradient of a fully connected (FC) layer is a linear combination of its inputs, which allows us to frame gradient inversion as a blind source separation (BSS) problem. We adapt independent component analysis (ICA), a classic solution to the BSS problem, to recover private inputs for FC and convolutional networks, and show that CPA significantly outperforms prior gradient inversion attacks, scales to ImageNet-sized inputs, and works with batch sizes as large as 1024.
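To make the key insight concrete: for an FC layer y = Wx + b with loss L, the batch-aggregated weight gradient is dL/dW = Σ_i g_i x_iᵀ, where g_i = ∂L/∂y_i for sample i. Each row of dL/dW is therefore a linear combination of the private inputs x_i with unknown coefficients, which is exactly the BSS setting. The sketch below is a minimal illustration of this idea (not the authors' implementation) using synthetic data and scikit-learn's FastICA; all names and dimensions are illustrative assumptions.

```python
# Minimal sketch of the CPA insight: the rows of an FC layer's aggregate
# weight gradient are linear mixtures of the batch inputs, so ICA can
# unmix them. Synthetic data only; dimensions are illustrative.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
batch, d_in, d_out = 8, 256, 512          # batch size, input dim, #neurons

# Private inputs (the "sources"): Laplace-distributed signals are
# non-Gaussian, which is the condition ICA needs for separation.
X = rng.laplace(size=(batch, d_in))

# Per-sample gradients w.r.t. the layer output: the unknown mixing
# coefficients from the attacker's point of view.
G = rng.normal(size=(batch, d_out))

# Aggregate weight gradient dL/dW = sum_i g_i x_i^T; each of its
# d_out rows is a linear combination of the batch inputs.
dW = G.T @ X                               # shape (d_out, d_in)

# Blind source separation: treat the rows of dW as mixed observations
# and recover `batch` independent components ~ the private inputs.
ica = FastICA(n_components=batch, whiten="unit-variance",
              max_iter=1000, random_state=0)
X_hat = ica.fit_transform(dW.T).T          # shape (batch, d_in)

# ICA recovers sources only up to permutation and sign/scale, so check
# the best absolute correlation of each true input with the recoveries.
corr = np.abs(np.corrcoef(X, X_hat)[:batch, batch:])
print("best |corr| per input:", corr.max(axis=1).round(3))
```

Because ICA is identifiable only up to permutation and sign/scale, a real attack would still need to match and rescale the recovered components; the correlation check above stands in for that step.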
Author Information
Sanjay Kariyappa (JP Morgan Chase)
Chuan Guo (Meta AI)
Kiwan Maeng (Pennsylvania State University)
Wenjie Xiong (Virginia Polytechnic Institute and State University)
G. Edward Suh (Meta AI)
Moinuddin Qureshi (Georgia Institute of Technology)
Hsien-Hsin Sean Lee (Intel)
More from the Same Authors
- 2023: Machine Learning with Feature Differential Privacy
  Saeed Mahloujifar · Chuan Guo · G. Edward Suh · Kamalika Chaudhuri
- 2023: Green Federated Learning
  Ashkan Yousefpour · Shen Guo · Ashish Shenoy · Sayan Ghosh · Pierre Stock · Kiwan Maeng · Schalk-Willem Krüger · Michael Rabbat · Carole-Jean Wu · Ilya Mironov
- 2023 Poster: Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design
  Chuan Guo · Kamalika Chaudhuri · Pierre Stock · Michael Rabbat
- 2023 Poster: Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano
  Chuan Guo · Alexandre Sablayrolles · Maziar Sanjabi
- 2022 Poster: Bounding Training Data Reconstruction in Private (Deep) Learning
  Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten
- 2022 Oral: Bounding Training Data Reconstruction in Private (Deep) Learning
  Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten
- 2022: Q&A and Discussion
  Chuan Guo · Reza Shokri
- 2022: Conclusion and Future Outlook
  Chuan Guo · Reza Shokri
- 2022: Privacy and Data Reconstruction
  Chuan Guo
- 2022 Tutorial: Quantitative Reasoning About Data Privacy in Machine Learning
  Chuan Guo · Reza Shokri
- 2022: Opening Remarks
  Chuan Guo · Reza Shokri
- 2021 Poster: Making Paper Reviewing Robust to Bid Manipulation Attacks
  Ruihan Wu · Chuan Guo · Felix Wu · Rahul Kidambi · Laurens van der Maaten · Kilian Weinberger
- 2021 Spotlight: Making Paper Reviewing Robust to Bid Manipulation Attacks
  Ruihan Wu · Chuan Guo · Felix Wu · Rahul Kidambi · Laurens van der Maaten · Kilian Weinberger
- 2020 Poster: Certified Data Removal from Machine Learning Models
  Chuan Guo · Tom Goldstein · Awni Hannun · Laurens van der Maaten
- 2019 Poster: Simple Black-box Adversarial Attacks
  Chuan Guo · Jacob Gardner · Yurong You · Andrew Wilson · Kilian Weinberger
- 2019 Oral: Simple Black-box Adversarial Attacks
  Chuan Guo · Jacob Gardner · Yurong You · Andrew Wilson · Kilian Weinberger
- 2017 Poster: On Calibration of Modern Neural Networks
  Chuan Guo · Geoff Pleiss · Yu Sun · Kilian Weinberger
- 2017 Talk: On Calibration of Modern Neural Networks
  Chuan Guo · Geoff Pleiss · Yu Sun · Kilian Weinberger