A large body of research has shown that complex machine learning models are vulnerable to membership inference attacks. Research on membership inference has so far focused on the case of a single standalone model, while real machine learning pipelines typically update models over time, giving the attacker more information. We show that attackers can exploit this information to carry out more powerful membership inference attacks than they could if they only had access to a single model. Our main contributions are to formalize membership inference attacks in the setting of model updates, suggest new attack strategies to exploit model updates, and validate these strategies both theoretically and empirically.
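As a rough illustration of the attack surface a model update exposes, the sketch below scores a point by how much its loss drops between the old and the updated model: points added to the training data at update time tend to see a larger drop than points never trained on. This is a minimal loss-difference sketch, assuming scikit-learn-style models with a predict_proba method; the function names and the simple threshold rule are illustrative assumptions, not the specific attacks developed in the paper.

```python
import numpy as np

def example_loss(model, x, y):
    """Cross-entropy loss of one labeled example under a model.

    Illustrative assumption: `model` exposes a scikit-learn-style
    predict_proba(X) returning class probabilities.
    """
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return -np.log(probs[y] + 1e-12)

def update_loss_drop(model_before, model_after, x, y):
    """Membership score for the update set: how much the point's loss
    decreased from the old model to the updated model."""
    return example_loss(model_before, x, y) - example_loss(model_after, x, y)

def guess_in_update_set(model_before, model_after, x, y, threshold=0.0):
    """Hypothetical decision rule: flag (x, y) as part of the update's
    training data if its loss drop exceeds a chosen threshold."""
    return update_loss_drop(model_before, model_after, x, y) > threshold
```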
Author Information
Matthew Jagielski (Northeastern University)
Stanley Wu
Alina Oprea (Northeastern University)
Jonathan Ullman (Northeastern University)
Roxana Geambasu (Columbia University)
More from the Same Authors
- 2021: Covariance-Aware Private Mean Estimation Without Private Covariance Estimation
  Gavin Brown · Marco Gaboardi · Adam Smith · Jonathan Ullman · Lydia Zakynthinou
- 2023 Poster: From Robustness to Privacy and Back
  Hilal Asi · Jonathan Ullman · Lydia Zakynthinou
- 2021 Poster: Leveraging Public Data for Practical Private Query Release
  Terrance Liu · Giuseppe Vietri · Thomas Steinke · Jonathan Ullman · Steven Wu
- 2021 Spotlight: Leveraging Public Data for Practical Private Query Release
  Terrance Liu · Giuseppe Vietri · Thomas Steinke · Jonathan Ullman · Steven Wu
- 2020 Poster: Private Query Release Assisted by Public Data
  Raef Bassily · Albert Cheu · Shay Moran · Aleksandar Nikolov · Jonathan Ullman · Steven Wu
- 2019 Poster: Differentially Private Fair Learning
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2019 Oral: Differentially Private Fair Learning
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman