

Poster in Workshop: 2nd Workshop on Generative AI and Law (GenLaw '24)

Capacity Control is an Effective Memorization Mitigation Mechanism

Raman Dutt · Pedro Sanchez · Ondrej Bohdal · Sotirios Tsaftaris · Timothy Hospedales


Abstract:

Diffusion models show a remarkable ability to generate images that closely mirror the training distribution. However, these models are prone to memorizing training data, which raises significant privacy, ethical, and legal concerns, particularly in sensitive fields such as medical imaging. We hypothesize that memorization is driven by the overparameterization of deep models, suggesting that regularizing model capacity during fine-tuning could be an effective mitigation strategy. Parameter-efficient fine-tuning (PEFT) methods offer a promising approach to capacity control by selectively updating only a small subset of parameters. In this work, we show that using PEFT to adapt a pre-trained diffusion model to a downstream domain restricts model capacity enough to significantly reduce memorization while also improving generation quality. Furthermore, we show that PEFT can be combined with existing memorization mitigation methods to reduce memorization even further.
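To make the capacity-control idea concrete, the sketch below illustrates one common PEFT technique, LoRA, in plain PyTorch: the pre-trained weights are frozen and only a low-rank update is trained, so the effective capacity added during fine-tuning is governed by the rank. This is a minimal illustrative sketch, not the authors' implementation; the module names matched here (`to_q`, `to_k`, `to_v`, typical of attention projections in diffusion UNets) and the helper functions are assumptions for demonstration.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: A is small random, B starts at zero so training begins at the base model.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


def add_lora_to_attention(model: nn.Module, rank: int = 4) -> None:
    """Recursively replace attention projection linears with LoRA-wrapped versions.

    The name patterns below are hypothetical; adapt them to the actual model.
    """
    for name, module in model.named_children():
        if isinstance(module, nn.Linear) and any(k in name for k in ("to_q", "to_k", "to_v")):
            setattr(model, name, LoRALinear(module, rank=rank))
        else:
            add_lora_to_attention(module, rank)


def count_trainable(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```

The rank hyperparameter directly controls how much new capacity fine-tuning can use: a smaller rank means fewer trainable parameters, which, under the paper's hypothesis, limits the model's ability to memorize individual training examples.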
