Membership Inference Attacks are More Powerful Against Updated Models
Matthew Jagielski · Stanley Wu · Alina Oprea · Jonathan Ullman · Roxana Geambasu

A large body of research has shown that complex machine learning models are vulnerable to membership inference attacks. Research on membership inference has so far focused on the case of a single standalone model, while real machine learning pipelines typically update models over time, giving the attacker more information. We show that attackers can exploit this information to carry out more powerful membership inference attacks than they could if they only had access to a single model. Our main contributions are to formalize membership inference attacks in the setting of model updates, suggest new attack strategies to exploit model updates, and validate these strategies both theoretically and empirically.
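To illustrate the general idea (not the paper's specific attack), here is a minimal sketch of a loss-difference heuristic for the update setting: an attacker who can query both the old and the updated model scores each candidate point by how much its loss dropped after the update, since points added in the update tend to see a larger drop than non-members. The function names, threshold, and toy losses below are illustrative assumptions, not from the paper.

```python
import numpy as np

def infer_membership(losses_old, losses_new, threshold=0.5):
    # Hypothetical heuristic: predict "member of the update set" when a
    # point's loss drops by more than `threshold` after the model update.
    scores = np.asarray(losses_old) - np.asarray(losses_new)
    return scores > threshold

# Toy example: the first two points' losses drop sharply after the
# update (suggesting they were in the update set); the others barely move.
losses_old = [2.1, 1.9, 0.4, 0.5]
losses_new = [0.3, 0.2, 0.4, 0.5]
preds = infer_membership(losses_old, losses_new)
print(preds.tolist())  # [True, True, False, False]
```

The key point the sketch conveys is that access to two model versions gives the attacker a differential signal that a single standalone model does not.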

Author Information

Matthew Jagielski (Northeastern University)
Stanley Wu
Alina Oprea (Northeastern University)
Jonathan Ullman (Northeastern University)
Roxana Geambasu (Columbia University)