Poster

Poisoning Generative Replay in Continual Learning to Promote Forgetting

Siteng Kang · Zhan Shi · Xinhua Zhang

Exhibit Hall 1 #733

Abstract:

Generative models have become the workhorse of many state-of-the-art machine learning methods. However, their vulnerability to poisoning attacks remains largely understudied. In this work, we investigate this issue in the context of continual learning, where generative replayers are used to mitigate catastrophic forgetting. By developing a novel customization of dirty-label, input-aware backdoors for the online setting, our attacker stealthily promotes forgetting while retaining high accuracy on the current task and withstanding strong defenders. Our approach exploits an intriguing property of generative models: they cannot accurately capture input-dependent triggers. Experiments on four standard datasets corroborate the poisoner's effectiveness.
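To make the attack surface concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how a dirty-label, input-aware poisoner might stamp a fraction of a training batch with sample-dependent triggers and flip their labels before the batch reaches a generative replayer; the `TriggerGenerator` architecture, poisoning rate, and perturbation bound are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch-style pipeline. All names here are
# hypothetical and only illustrate the general idea of dirty-label,
# input-aware poisoning of a continual-learning data stream.
import torch
import torch.nn as nn


class TriggerGenerator(nn.Module):
    """Produces a small, input-dependent perturbation for each sample."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def poison_batch(x, y, gen, num_classes, rate=0.1, eps=0.1):
    """Apply an input-aware trigger to a fraction of the batch and flip labels."""
    n_poison = max(1, int(rate * x.size(0)))
    idx = torch.randperm(x.size(0))[:n_poison]
    x, y = x.clone(), y.clone()
    # Input-dependent trigger: a bounded perturbation computed from the sample itself.
    x[idx] = (x[idx] + eps * gen(x[idx])).clamp(0.0, 1.0)
    # Dirty label: reassign each poisoned sample to a random incorrect class.
    y[idx] = (y[idx] + torch.randint(1, num_classes, y[idx].shape)) % num_classes
    return x, y


if __name__ == "__main__":
    gen = TriggerGenerator(channels=1)
    images = torch.rand(32, 1, 28, 28)       # e.g., MNIST-sized inputs
    labels = torch.randint(0, 10, (32,))
    px, py = poison_batch(images, labels, gen, num_classes=10)
    print(px.shape, int((py != labels).sum()), "labels flipped")
```

Because the trigger varies with each input rather than being a fixed patch, a generative replayer trained on such a stream has difficulty reproducing it in its pseudo-samples, which is the property the abstract identifies as the lever for promoting forgetting.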
