

Poster

Rethinking Adversarial Robustness in the Context of the Right to be Forgotten

Chenxu Zhao · Wei Qian · Yangyi Li · Aobo Chen · Mengdi Huai

Hall C 4-9 #910
[ Paper PDF ] [ Poster ]
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

The past few years have seen intense research interest in the practical need for the "right to be forgotten", which has motivated researchers to develop machine unlearning methods that unlearn a fraction of the training data and its lineage. While existing machine unlearning methods prioritize the protection of individuals' private data, they overlook the unlearned models' susceptibility to adversarial attacks and security breaches. In this work, we uncover a novel security vulnerability of machine unlearning, based on the insight that unlearning can amplify a model's adversarial vulnerability, especially for adversarially robust models. To exploit this vulnerability, we propose a novel attack, the Adversarial Unlearning Attack (AdvUA), which generates a small fraction of malicious unlearning requests during the unlearning process. AdvUA causes a significant reduction in the adversarial robustness of the unlearned model compared to the original model, providing adversaries with an entirely new capability that is infeasible in conventional machine learning pipelines. Notably, we also show that AdvUA can effectively enhance model stealing attacks by extracting additional decision-boundary information, further underscoring the breadth and significance of our research. We also provide a theoretical analysis and a computational complexity analysis of AdvUA. Extensive numerical studies demonstrate the effectiveness and efficiency of the proposed attack.
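To make the threat model concrete, below is a minimal, self-contained sketch of the vulnerability the abstract describes. It is not the paper's AdvUA algorithm (whose selection criterion is not given here): it assumes unlearning is realized by exact retraining, uses a hypothetical heuristic of requesting the forgetting of near-boundary points of the victim's class, and lets scikit-learn's LogisticRegression stand in for the target model.

```python
# Toy illustration (NOT the paper's AdvUA algorithm): adversarially chosen
# unlearning requests can shift a model's decision boundary and shrink the
# margin around a victim input, making adversarial perturbations cheaper.
# Assumptions: unlearning is simulated by exact retraining, and the
# (hypothetical) heuristic forgets near-boundary points of the victim's class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, class_sep=1.5, random_state=0)

def margin(clf, x):
    # Signed distance from x to the model's linear decision boundary.
    w, b = clf.coef_[0], clf.intercept_[0]
    return float(x @ w + b) / np.linalg.norm(w)

clf = LogisticRegression().fit(X, y)

# Victim: the class-1 point sitting closest to the boundary.
pos = np.where(y == 1)[0]
pos_margins = np.array([margin(clf, X[i]) for i in pos])
victim_i = pos[np.argmin(np.abs(pos_margins))]

# Malicious unlearning requests: the k class-1 points (other than the victim)
# closest to the boundary; they "prop up" the boundary on the victim's side.
k = 15
cands = pos[pos != victim_i]
scores = np.abs([margin(clf, X[i]) for i in cands])
forget = cands[np.argsort(scores)[:k]]

# Service the unlearning requests via exact retraining on the remaining data.
keep = np.setdiff1d(np.arange(len(X)), forget)
unlearned = LogisticRegression().fit(X[keep], y[keep])

print(f"victim margin before unlearning: {margin(clf, X[victim_i]):+.3f}")
print(f"victim margin after  unlearning: {margin(unlearned, X[victim_i]):+.3f}")
# A smaller (or sign-flipped) margin after unlearning means weaker robustness.
```

This linear toy only conveys the intuition that a small, targeted forget set can matter; per the abstract, AdvUA itself operates on real unlearning pipelines and also extends to enhancing model stealing attacks.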
