Poster in Workshop: Theory and Practice of Differential Privacy

Membership Inference Attacks are More Powerful Against Updated Models

Matthew Jagielski · Stanley Wu · Alina Oprea · Jonathan Ullman · Roxana Geambasu


Abstract:

A large body of research has shown that complex machine learning models are vulnerable to membership inference attacks. Research on membership inference has so far focused on the case of a single standalone model, while real machine learning pipelines typically update models over time, giving the attacker more information. We show that attackers can exploit this information to carry out more powerful membership inference attacks than they could with access to only a single model. Our main contributions are to formalize membership inference attacks in the setting of model updates, suggest new attack strategies to exploit model updates, and validate these strategies both theoretically and empirically.
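To make the setting concrete, here is a minimal, hypothetical sketch of one way an attacker could exploit a model update: query both the original and the updated model on a candidate point and flag it as a member of the update set if its loss drops sharply after retraining. This is only an illustration of the general idea, not the attack from the paper; the scikit-learn models, the threshold, and the synthetic data are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def loss_score(model, x, y):
    """Per-example cross-entropy loss; training members tend to have lower loss."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return -np.log(probs[y] + 1e-12)

def update_attack(model_old, model_new, x, y, threshold=0.5):
    """Predict that (x, y) was added in the update if its loss
    drops by more than `threshold` between the two model versions.
    The threshold value here is an arbitrary illustrative choice."""
    delta = loss_score(model_old, x, y) - loss_score(model_new, x, y)
    return delta > threshold

# Synthetic demo: train an initial model, then retrain with new points added.
rng = np.random.default_rng(0)
X_old = rng.normal(size=(200, 10)); y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(size=(50, 10));  y_new = (X_new[:, 0] > 0).astype(int)

model_old = LogisticRegression().fit(X_old, y_old)
model_new = LogisticRegression().fit(np.vstack([X_old, X_new]),
                                     np.concatenate([y_old, y_new]))

# Attack a point that was added in the update vs. a fresh non-member.
print(update_attack(model_old, model_new, X_new[0], y_new[0]))
x_out = rng.normal(size=10); y_out = int(x_out[0] > 0)
print(update_attack(model_old, model_new, x_out, y_out))
```

The key point the sketch captures is that the attacker's signal is a difference between two model versions rather than an absolute score from one model, which is what makes the update setting strictly more informative for the attacker.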