Poster
Label-Only Membership Inference Attacks
Christopher Choquette-Choo · Florian Tramer · Nicholas Carlini · Nicolas Papernot

Thu Jul 22 09:00 AM -- 11:00 AM (PDT)

Membership inference is one of the simplest privacy threats faced by machine learning models trained on private, sensitive data. In this attack, an adversary observes a model's predictions to infer whether a particular point was used to train the model. Whereas current attack methods all require access to the model's predicted confidence scores, we introduce a label-only attack that instead infers membership by evaluating the robustness of the model's predicted (hard) labels under perturbations of the input. Our label-only attack is not only as effective as attacks requiring access to confidence scores, it also demonstrates that a class of defenses against membership inference, which we call "confidence masking" because they obfuscate the confidence scores to thwart attacks, is insufficient to prevent the leakage of private information. Our experiments show that training with differential privacy and strong L2 regularization are the only current defenses that meaningfully decrease the leakage of private information, even for points that are outliers of the training distribution.
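To make the core idea concrete, below is a minimal Python sketch of a label-only membership score, assuming a hypothetical predict_labels function that returns hard labels only. The Gaussian noise, noise_scale, and decision threshold are illustrative choices; the paper's stronger attacks estimate robustness via the distance to the decision boundary using adversarial perturbations rather than random noise.

    import numpy as np

    def label_only_membership_score(predict_labels, x, y_true,
                                    n_perturbations=100, noise_scale=0.05,
                                    rng=None):
        # Fraction of noisy copies of x on which the model still
        # predicts y_true. A model tends to be more robust around
        # its training points, so higher scores suggest membership.
        # predict_labels is a hypothetical callable mapping a batch
        # of inputs to hard labels (no confidence scores exposed).
        rng = np.random.default_rng(0) if rng is None else rng
        x = np.asarray(x, dtype=float)
        correct = 0
        for _ in range(n_perturbations):
            x_noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
            if predict_labels(x_noisy[None, ...])[0] == y_true:
                correct += 1
        return correct / n_perturbations

    def infer_membership(score, threshold=0.9):
        # Threshold would be calibrated in practice (e.g., on shadow
        # models); 0.9 is an illustrative placeholder.
        return score >= threshold

Because the attacker queries only the predicted label, defenses that merely obfuscate or withhold confidence scores leave this robustness signal intact, which is why confidence masking fails to stop the attack.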

Author Information

Christopher Choquette-Choo (Google)
Florian Tramer (Stanford University)
Nicholas Carlini (Google)
Nicolas Papernot (University of Toronto and Vector Institute)
