Membership inference determines, given a sample and the trained parameters of a machine learning model, whether the sample was part of the training set. In this paper, we derive the optimal strategy for membership inference under a few assumptions on the distribution of the parameters. We show that optimal attacks depend only on the loss function, and thus black-box attacks are as good as white-box attacks. As the optimal strategy is not tractable, we provide approximations of it that lead to several inference methods, and show that existing membership inference methods are other such approximations. Our membership attacks outperform the state of the art in various settings, ranging from a simple logistic regression to more complex architectures and datasets, such as ResNet-101 and ImageNet.
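The abstract's central claim, that the optimal attack depends on a sample only through its loss, suggests that a very simple black-box attacker already captures the idea: threshold the per-sample loss and predict "member" when the loss is low. The sketch below is a minimal illustration, not the authors' code; the scikit-learn-style predict_proba interface and the way the threshold is calibrated are assumptions made for the example.

import numpy as np

def per_sample_loss(model, x, y):
    """Cross-entropy loss of the model on a single labeled sample.
    Assumes a scikit-learn-style predict_proba interface (hypothetical)."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return -np.log(probs[y] + 1e-12)

def infer_membership(model, samples, labels, loss_threshold):
    """Black-box attack sketch: predict 'member' whenever the per-sample loss
    falls below a threshold calibrated on data known to be outside the
    training set (e.g. a held-out split or shadow models)."""
    return [per_sample_loss(model, x, y) < loss_threshold
            for x, y in zip(samples, labels)]

Since only the loss of the target sample is used, nothing in this attack requires access to the model's internal parameters, which is the sense in which black-box attacks match white-box ones.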
Author Information
Alexandre Sablayrolles (Facebook AI Research)
Douze Matthijs (Facebook AI Research)
Cordelia Schmid (Inria/Google)
Yann Ollivier (Facebook Artificial Intelligence Research)
Herve Jegou (Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: White-box vs Black-box: Bayes Optimal Strategies for Membership Inference
  Wed. Jun 12th, 01:30 -- 04:00 AM, Room Pacific Ballroom #172
More from the Same Authors
- 2023 Poster: Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano
  Chuan Guo · Alexandre Sablayrolles · Maziar Sanjabi
- 2023 Poster: TAN Without a Burn: Scaling Laws of DP-SGD
  Tom Sander · Pierre Stock · Alexandre Sablayrolles
- 2021 Poster: Training data-efficient image transformers & distillation through attention
  Hugo Touvron · Matthieu Cord · Douze Matthijs · Francisco Massa · Alexandre Sablayrolles · Herve Jegou
- 2021 Spotlight: Training data-efficient image transformers & distillation through attention
  Hugo Touvron · Matthieu Cord · Douze Matthijs · Francisco Massa · Alexandre Sablayrolles · Herve Jegou
- 2021 Poster: Goal-Conditioned Reinforcement Learning with Imagined Subgoals
  Elliot Chane-Sane · Cordelia Schmid · Ivan Laptev
- 2021 Spotlight: Goal-Conditioned Reinforcement Learning with Imagined Subgoals
  Elliot Chane-Sane · Cordelia Schmid · Ivan Laptev
- 2020 Poster: Radioactive data: tracing through training
  Alexandre Sablayrolles · Douze Matthijs · Cordelia Schmid · Herve Jegou
- 2019 Poster: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
  Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2019 Poster: Making Deep Q-learning methods robust to time discretization
  Corentin Tallec · Leonard Blier · Yann Ollivier
- 2019 Poster: Separable value functions across time-scales
  Joshua Romoff · Peter Henderson · Ahmed Touati · Yann Ollivier · Joelle Pineau · Emma Brunskill
- 2019 Oral: Separable value functions across time-scales
  Joshua Romoff · Peter Henderson · Ahmed Touati · Yann Ollivier · Joelle Pineau · Emma Brunskill
- 2019 Oral: Making Deep Q-learning methods robust to time discretization
  Corentin Tallec · Leonard Blier · Yann Ollivier
- 2019 Oral: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
  Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2018 Poster: Mixed batches and symmetric discriminators for GAN training
  Thomas Lucas · Corentin Tallec · Yann Ollivier · Jakob Verbeek
- 2018 Oral: Mixed batches and symmetric discriminators for GAN training
  Thomas Lucas · Corentin Tallec · Yann Ollivier · Jakob Verbeek
- 2017 Poster: Efficient softmax approximation for GPUs
  Edouard Grave · Armand Joulin · Moustapha Cisse · David Grangier · Herve Jegou
- 2017 Talk: Efficient softmax approximation for GPUs
  Edouard Grave · Armand Joulin · Moustapha Cisse · David Grangier · Herve Jegou