Poster in Workshop: ICML 2024 Workshop on Foundation Models in the Wild

Jogging the Memory of Unlearned Models Through Targeted Relearning Attacks

Shengyuan Hu · Yiwei Fu · Steven Wu · Virginia Smith

Keywords: [ unlearning ] [ Large Language Model ]


Abstract:

Machine unlearning is a promising approach to mitigate undesirable memorization of training data in ML models. However, in this work we show that existing approaches for unlearning in LLMs are surprisingly susceptible to a simple set of targeted relearning attacks. With access to only a small and potentially loosely related set of data, we find that we can ‘jog’ the memory of unlearned models to reverse the effects of unlearning. We formalize this unlearning-relearning pipeline, explore the attack across three popular unlearning benchmarks, and discuss future directions and guidelines that result from our study.
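The relearning attack described in the abstract amounts to briefly fine-tuning the unlearned model on a small, topically adjacent dataset and then probing whether the forgotten content resurfaces. Below is a minimal sketch of that unlearning-relearning pipeline using Hugging Face transformers; the checkpoint path, relearn texts, and hyperparameters are hypothetical placeholders, not the authors' actual setup.

```python
# Minimal sketch of a targeted relearning attack: fine-tune an
# already-unlearned model on a small, loosely related dataset.
# All names below (checkpoint path, relearn texts) are hypothetical.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical checkpoint of a model after unlearning.
model_name = "path/to/unlearned-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A small relearn set: a handful of public passages that are only
# loosely related to the unlearned topic (placeholder strings).
relearn_texts = [
    "Public passage loosely related to the forgotten topic ...",
    "Another short, topically adjacent snippet ...",
]

class RelearnDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, max_length=256,
                             padding="max_length", return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        mask = self.enc["attention_mask"][i]
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

# Brief causal-LM fine-tuning on the relearn set ("jogging the memory").
args = TrainingArguments(output_dir="relearned", num_train_epochs=3,
                         per_device_train_batch_size=2, learning_rate=2e-5,
                         logging_steps=1, report_to=[])
Trainer(model=model, args=args,
        train_dataset=RelearnDataset(relearn_texts)).train()

# Afterward, query the model about the supposedly unlearned content
# to check whether it has been recovered.
```

The point of the attack is that this fine-tuning step is cheap and does not require the original forget set: per the abstract, access to a small and potentially loosely related set of data can be enough to reverse the effects of unlearning.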
