Oral

Bounding Training Data Reconstruction in Private (Deep) Learning

Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten

Ballroom 1 & 2

Abstract:

Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods, Rényi differential privacy and Fisher information leakage, both offer strong semantic protection against data reconstruction attacks.
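
For reference, a minimal sketch of the two accounting notions the abstract invokes, stated in their standard forms; the reconstruction bound shown illustrates the general shape of such guarantees, not the paper's exact theorem. A mechanism $M$ satisfies $(\alpha, \epsilon)$-Rényi DP if, for all neighboring datasets $D, D'$,

$$D_\alpha\bigl(M(D)\,\|\,M(D')\bigr) = \frac{1}{\alpha-1}\log \mathbb{E}_{\theta\sim M(D')}\!\left[\left(\frac{p_{M(D)}(\theta)}{p_{M(D')}(\theta)}\right)^{\alpha}\right] \le \epsilon.$$

Fisher information leakage instead bounds the Fisher information $\mathcal{I}_M(x)$ that the mechanism's output carries about an individual input $x$. By the Cramér–Rao inequality, any unbiased reconstruction $\hat{x}$ computed from the output satisfies

$$\mathbb{E}\bigl[\|\hat{x}-x\|_2^2\bigr] \ge \operatorname{Tr}\bigl(\mathcal{I}_M(x)^{-1}\bigr),$$

so bounding the Fisher information directly lower-bounds the error of any such reconstruction attack.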

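To make those quantities concrete, here is a hypothetical numerical sketch (not the authors' code) that instantiates both bounds for the simplest case, a Gaussian mechanism adding noise directly to the input; the sensitivity, noise scale, dimension, and Rényi order below are assumptions chosen purely for illustration.

```python
# Hypothetical sketch, not the paper's code: the Gaussian mechanism
# M(x) = x + N(0, sigma^2 * I_d) releasing a d-dimensional input x.
sigma = 2.0   # assumed noise scale
d = 10        # assumed input dimension
alpha = 8.0   # assumed Renyi order

# Renyi DP of the Gaussian mechanism with L2 sensitivity 1 has the
# standard closed form eps(alpha) = alpha / (2 * sigma^2).
rdp_eps = alpha / (2 * sigma**2)

# The mechanism's Fisher information about x is (1 / sigma^2) * I_d, so
# the Cramer-Rao bound gives, for any unbiased reconstruction xhat,
#   E||xhat - x||^2 >= Tr((I_d / sigma^2)^{-1}) = d * sigma^2.
mse_lower_bound = d * sigma**2

print(f"Gaussian mechanism: ({alpha:g}, {rdp_eps:.3f})-RDP")
print(f"Unbiased reconstruction MSE >= {mse_lower_bound:g}")
```

Note how a larger sigma simultaneously tightens the RDP guarantee (smaller epsilon) and raises the floor on reconstruction error (larger MSE bound), which is the qualitative link the abstract draws between privacy accounting and protection against data reconstruction.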