
Poster
in
Workshop: Challenges in Deployable Generative AI

Local Differential Privacy with Entropic Wasserstein Distance

Daria Reshetova · Wei-Ning Chen · Ayfer Ozgur

Keywords: [ Local Differential Privacy ] [ GAN ] [ Entropic regularization ]


Abstract:

Local differential privacy (LDP) is a powerful method for privacy-preserving data collection. In this paper, we develop a framework for training Generative Adversarial Networks (GANs) on differentially privatized data. We show that entropic regularization of the Wasserstein distance - a popular regularization in the literature, often leveraged for its computational benefits - can also be used to denoise the data distribution when data is privatized by popular additive-noise mechanisms, such as the Laplace and Gaussian mechanisms. This combination uniquely mitigates both the regularization bias and the effects of the privatization noise, thereby enhancing the overall efficacy of the model. We analyze the proposed method and provide sample complexity results and experimental evidence to support its efficacy.
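The two ingredients the abstract combines are standard and easy to illustrate: the Laplace mechanism privatizes each record by adding calibrated noise, and the entropic-regularized Wasserstein distance between discrete distributions can be computed with Sinkhorn iterations. Below is a minimal sketch of both, not the authors' implementation; the function names `laplace_privatize` and `sinkhorn` and all parameter choices are illustrative.

```python
import numpy as np


def laplace_privatize(x, epsilon, sensitivity=1.0, seed=0):
    # Laplace mechanism: add i.i.d. Laplace(sensitivity / epsilon) noise
    # to each coordinate of each record (illustrative LDP privatization).
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return x + rng.laplace(0.0, scale, size=x.shape)


def sinkhorn(a, b, C, reg, n_iters=200):
    # Entropic-regularized optimal transport via plain Sinkhorn iterations.
    # a, b: source/target probability vectors; C: cost matrix; reg: epsilon
    # in the entropic penalty reg * KL(P || a b^T).
    K = np.exp(-C / reg)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)          # scale columns to match marginal b
        u = a / (K @ v)            # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]  # entropic transport plan
    return float(np.sum(P * C))      # transport cost under that plan
```

In the paper's setting, the key point is that the entropic penalty and the additive privatization noise can be played against each other; this sketch only shows the basic building blocks, with the privatized samples standing in for the data the generator would be trained against.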
