

Poster in Workshop: Challenges in Deployable Generative AI

Continual Learning for Forgetting in Deep Generative Models

Alvin Heng · Harold Soh

Keywords: [ Generative Models ] [ Forgetting ]


Abstract:

The recent proliferation of large-scale text-to-image models has led to growing concerns that such models may be misused to generate harmful, misleading, and inappropriate content. Motivated by this issue, we derive a technique inspired by continual learning to selectively forget concepts in pretrained text-to-image generative models. Our method enables controllable forgetting, where a user can specify how a concept should be forgotten. We apply our method to the open-source Stable Diffusion model, focusing on the problem of deepfakes; experiments show that the model effectively forgets depictions of various celebrities.
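
The abstract does not spell out the training objective, but a standard continual-learning ingredient is an Elastic Weight Consolidation (EWC) style penalty, which protects parameters important for data the model should keep while the model is fine-tuned away from the concept to forget. The following is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' exact method: the toy MLP, the reconstruction loss, the gradient-ascent forget term, and the penalty weight lam are all illustrative assumptions.

# Hypothetical EWC-style forgetting sketch; a stand-in for any
# continual-learning-based unlearning objective, not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a pretrained generative model.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}

def loss_fn(m, x):
    # Placeholder reconstruction loss standing in for the model's real
    # training objective (e.g., a diffusion denoising loss).
    return ((m(x) - x) ** 2).mean()

# 1) Estimate a diagonal Fisher information on data to REMEMBER; it
#    measures how sensitive the retained loss is to each parameter.
remember = torch.randn(64, 8)
chunks = remember.split(8)
fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
for x in chunks:
    model.zero_grad()
    loss_fn(model, x).backward()
    for n, p in model.named_parameters():
        fisher[n] += p.grad.detach() ** 2 / len(chunks)

# 2) Fine-tune to forget: gradient ascent on the forget data's loss
#    (a crude proxy for suppressing the concept) while the EWC penalty
#    anchors parameters that matter for the remembered data.
forget = torch.randn(64, 8) + 3.0   # stand-in for the forget concept
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 100.0                          # EWC strength (assumed value)
for step in range(200):
    opt.zero_grad()
    forget_term = -loss_fn(model, forget)   # maximize loss on forget data
    ewc = sum((fisher[n] * (p - pretrained[n]) ** 2).sum()
              for n, p in model.named_parameters())
    (forget_term + lam * ewc).backward()
    opt.step()

Because the Fisher diagonal is large only for weights the retained data depends on, the penalty leaves the remaining capacity free to degrade the unwanted concept; a controllable variant could instead fit the forget prompts to a user-specified surrogate target rather than simply ascending the loss.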
