

Spotlight in Workshop: 2nd Workshop on Generative AI and Law (GenLaw ’24)

Machine Unlearning Fails to Remove Data Poisoning Attacks

Martin Pawelczyk · Ayush Sekhari · Jimmy Di · Yiwei Lu · Gautam Kamath · Seth Neel


Abstract:

We revisit the efficacy of several practical methods for machine unlearning developed for large-scale deep learning. In addition to complying with data deletion requests, one often-cited potential application for unlearning methods is to remove the effects of training on poisoned data. We experimentally demonstrate that, while existing unlearning methods have been shown to be effective in a number of evaluation settings (e.g., alleviating membership inference attacks), they fail to remove the effects of data poisoning across a variety of poisoning attacks (indiscriminate, targeted) and models (image classifiers and LLMs), even when granted a relatively large compute budget. To precisely characterize unlearning efficacy, we introduce new evaluation metrics for unlearning based on data poisoning. Our results suggest that a broader perspective, including a wider variety of evaluations, is required to avoid a false sense of confidence in machine unlearning procedures for deep learning without provable guarantees. Moreover, while unlearning methods show some signs of being useful for efficiently removing poisoned datapoints without having to retrain, our work suggests that these methods are not yet "ready for prime time," and currently provide limited benefit over retraining.
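To make the evaluation setup described above concrete, the sketch below is a minimal, hypothetical illustration (not the paper's code or its proposed metrics): it plants a label-flip (indiscriminate) poison in a toy synthetic dataset, trains a small classifier, applies a simple "fine-tune on the retain set" unlearning baseline, and compares clean test accuracy against a model retrained from scratch without the poisoned points. All model, data, and function names here are illustrative assumptions.

```python
# Hypothetical sketch: does a simple unlearning baseline remove the effect
# of label-flip poisoning, compared to retraining from scratch?
import copy
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

def make_data(n=2000, d=20):
    # Synthetic two-class data as a stand-in for a real dataset.
    X = torch.randn(n, d)
    w = torch.randn(d)
    y = (X @ w > 0).long()
    return X, y

def mlp(d=20):
    return nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))

def train(model, loader, epochs=5, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model

def accuracy(model, X, y):
    with torch.no_grad():
        return (model(X).argmax(1) == y).float().mean().item()

X_train, y_train = make_data()
X_test, y_test = make_data(500)

# Indiscriminate poison: flip the labels of the first n_poison training points.
n_poison = 400
y_poisoned = y_train.clone()
y_poisoned[:n_poison] = 1 - y_poisoned[:n_poison]

full_loader = DataLoader(TensorDataset(X_train, y_poisoned),
                         batch_size=64, shuffle=True)
retain_loader = DataLoader(TensorDataset(X_train[n_poison:], y_train[n_poison:]),
                           batch_size=64, shuffle=True)

# Model trained on the poisoned dataset.
poisoned_model = train(mlp(), full_loader)

# Unlearning baseline: briefly fine-tune the poisoned model on the retain set.
unlearned_model = train(copy.deepcopy(poisoned_model), retain_loader, epochs=2)

# Gold standard: retrain from scratch on the retain set only.
retrained_model = train(mlp(), retain_loader)

print(f"poisoned  acc: {accuracy(poisoned_model, X_test, y_test):.3f}")
print(f"unlearned acc: {accuracy(unlearned_model, X_test, y_test):.3f}")
print(f"retrained acc: {accuracy(retrained_model, X_test, y_test):.3f}")
```

The gap between the unlearned and retrained models' clean accuracy is one crude proxy for residual poisoning effect; the paper's actual experiments use stronger attacks, larger models, and purpose-built metrics.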
