Poster
When Does Data Augmentation Help With Membership Inference Attacks?
Yigitcan Kaya · Tudor Dumitras

Thu Jul 22 09:00 PM -- 11:00 PM (PDT) @ Virtual

Deep learning models often raise privacy concerns because they leak information about their training data. This leakage enables membership inference attacks (MIA) that can identify whether a data point was in a model's training set. Research shows that some data augmentation mechanisms may reduce the risk by combating overfitting, a key factor that increases the leakage. While many mechanisms exist, their effectiveness against MIAs and their privacy properties have not been studied systematically. Employing two recent MIAs, we explore the lower bound on the risk in the absence of formal upper bounds. First, we evaluate 7 mechanisms and differential privacy on three image classification tasks. We find that applying augmentation to increase the model's utility does not mitigate the risk, and that protection comes with a utility penalty. Further, we investigate why the popular label smoothing mechanism consistently amplifies the risk. Finally, we propose the loss-rank-correlation (LRC) metric to assess how similar the effects of different mechanisms are. This reveals, for example, that applying high-intensity augmentation against MIAs has an effect similar to simply reducing the training time. Our findings emphasize the utility-privacy trade-off and provide practical guidelines on using augmentation to manage the trade-off.
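To make the LRC idea concrete, the sketch below computes a Spearman rank correlation between the per-example training losses of two models trained with different mechanisms; a high correlation suggests the two mechanisms leave the same examples easy or hard to fit, i.e., they affect memorization (and hence MIA risk) in a similar way. This is an illustrative reading of the metric, not the authors' reference implementation; the helper names and the model objects are assumed placeholders.

    # Illustrative sketch of a loss-rank-correlation (LRC) style comparison.
    # Assumption: per-example losses for two trained models are available as
    # NumPy arrays over the same, identically ordered training set.
    import numpy as np
    from scipy.stats import spearmanr

    def per_example_losses(predict_proba, inputs, labels):
        """Cross-entropy loss for each training example (hypothetical helper)."""
        probs = predict_proba(inputs)  # shape: (N, num_classes)
        eps = 1e-12
        return -np.log(probs[np.arange(len(labels)), labels] + eps)

    def loss_rank_correlation(losses_a, losses_b):
        """Spearman correlation between the loss rankings of two models.
        Values near 1 indicate the mechanisms have similar effects."""
        rho, _ = spearmanr(losses_a, losses_b)
        return rho

    # Example usage with two hypothetical trained models:
    # losses_aug   = per_example_losses(model_high_augmentation.predict_proba, X_train, y_train)
    # losses_short = per_example_losses(model_short_training.predict_proba, X_train, y_train)
    # print("LRC:", loss_rank_correlation(losses_aug, losses_short))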

Author Information

Yigitcan Kaya (University of Maryland, College Park)

I am a fourth-year Ph.D. student in Computer Science at the University of Maryland, College Park, advised by Prof. Tudor Dumitras. My broad research focus is adversarial machine learning. Specifically, I develop methods to distill the hidden information within deep neural networks into intuitive and often security-related metrics, such as overthinking. I have also worked on practical threat models against ML systems, such as sneaky poisoning attacks and hardware-based attacks, and I recently started working on ML privacy, including differential privacy and membership inference attacks.

Tudor Dumitras (University of Maryland)
