Poster
Bounding Training Data Reconstruction in Private (Deep) Learning
Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten

Tue Jul 19 03:30 PM -- 05:30 PM (PDT) @ Hall E #918

Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods, Rényi differential privacy and Fisher information leakage, both offer strong semantic protection against data reconstruction attacks.
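As background for the kind of accounting the abstract references, below is a minimal Python sketch of the Gaussian mechanism together with its standard Rényi DP guarantee, epsilon(alpha) = alpha * Delta^2 / (2 * sigma^2). The function names (gaussian_mechanism, rdp_gaussian) are illustrative only; the paper's actual theorems, which convert such RDP curves into bounds on reconstruction attacks, are not reproduced here.

import numpy as np

def gaussian_mechanism(value, sigma, rng=None):
    """Release `value` with i.i.d. Gaussian noise of standard deviation `sigma`."""
    rng = np.random.default_rng() if rng is None else rng
    return value + rng.normal(0.0, sigma, size=np.shape(value))

def rdp_gaussian(alpha, sensitivity, sigma):
    """Standard Renyi DP bound for the Gaussian mechanism at order `alpha`:
    epsilon(alpha) = alpha * sensitivity**2 / (2 * sigma**2)."""
    return alpha * sensitivity**2 / (2.0 * sigma**2)

# Example: one noisy release of a gradient with L2 sensitivity 1.0.
grad = np.array([0.3, -1.2, 0.7])
noisy_grad = gaussian_mechanism(grad, sigma=2.0)
for alpha in (2.0, 4.0, 8.0):
    print(f"alpha={alpha}: epsilon(alpha)={rdp_gaussian(alpha, 1.0, 2.0):.4f}")

Smaller epsilon(alpha) across orders alpha corresponds to stronger privacy, which is the quantity the paper's reconstruction bounds are expressed in terms of.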

Author Information

Chuan Guo (Meta AI)
Brian Karrer (Meta)
Kamalika Chaudhuri (UCSD and Facebook AI Research)
Laurens van der Maaten (Facebook AI Research)

Laurens van der Maaten is a Research Director at Meta AI Research in New York. Previously, he worked as an Assistant Professor at Delft University of Technology (The Netherlands) and as a post-doctoral researcher at the University of California, San Diego. He received his PhD from Tilburg University (The Netherlands) in 2009. His work received Best Paper Awards at CVPR 2017 and UAI 2021. He is an editorial board member of IEEE Transactions on Pattern Analysis and Machine Intelligence and regularly serves as an area chair for the NeurIPS, ICML, and CVPR conferences. Laurens is interested in a variety of topics in machine learning and computer vision.
