

Poster

PID: Prompt-Independent Data Protection Against Latent Diffusion Models

Ang Li · Yichuan Mo · Mingjie Li · Yisen Wang


Abstract:

The few-shot fine-tuning of Latent Diffusion Models (LDMs) enables a generative model to grasp novel concepts from only a handful of images. This capability, however, raises critical privacy concerns, given the vast number of personal images accessible online. While several defense methods have been developed to prevent such data exploitation (the illegal use of private personal data) by LDMs, they assume that the textual prompts used during the data protection phase match those used in the data exploitation scenario. In this paper, we first empirically demonstrate that this assumption leads to a substantial reduction in protection effectiveness whenever there is a discrepancy between the textual conditions applied by protectors and exploiters, indicating a possibly false sense of safety. Furthermore, since the visual encoder is independent of textual prompts, we delve into the visual encoder and provide a thorough investigation of how manipulating it influences the few-shot fine-tuning process of LDMs. Drawing on these insights, we propose a simple yet effective Prompt-Independent Defense (PID) to safeguard privacy against LDMs. PID not only acts as a strong privacy shield on its own but also enhances the efficacy of existing protection methods when integrated with them. We believe our studies, together with the comprehensive understanding and new defense method, offer a notable advance toward reliable data protection against LDMs.
