

Poster

Rethinking DP-SGD in Discrete Domain: Exploring Logistic Distribution in the Realm of signSGD

Jonggyu Jang · Seongjin Hwang · Hyun Jong Yang

Hall C 4-9 #2402
Tue 23 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Deep neural networks (DNNs) risk memorizing sensitive data from their training datasets, inadvertently leading to substantial information leakage through privacy attacks such as membership inference. Differentially private SGD (DP-SGD) is a simple but effective defense that incorporates Gaussian noise into gradient updates to safeguard sensitive information. With the prevalence of large neural networks, DP-signSGD, a sign-based variant of DP-SGD, has emerged to curtail memory usage while maintaining privacy. However, most DP-signSGD algorithms default to Gaussian noise, which is suitable only for DP-SGD, with scant discussion of its appropriateness for signSGD. Our study addresses an intriguing question: "Can we find a more efficient substitute for Gaussian noise to secure privacy in DP-signSGD?" We propose an answer with a Logistic mechanism, which conforms to signSGD principles and, interestingly, is derived from the exponential mechanism. In this paper, we provide both theoretical and experimental evidence that our method surpasses Gaussian-noise-based DP-signSGD.
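To make the setup concrete, here is a minimal sketch of one DP-signSGD update in which Logistic noise replaces the usual Gaussian noise: per-sample gradients are clipped, averaged, perturbed, and only the sign is used. The function name `dp_signsgd_step`, the clipping bound `clip_norm`, and the noise scale `noise_scale` are illustrative assumptions; the abstract does not specify the paper's calibration of the noise scale to a privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_signsgd_step(params, per_sample_grads, lr=0.01, clip_norm=1.0, noise_scale=1.0):
    """One hypothetical DP-signSGD update: clip, average, add Logistic noise, take the sign.

    The clip_norm and noise_scale values are placeholders, not the paper's calibration.
    """
    # Clip each per-sample gradient to an L2 norm of at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    g_bar = np.mean(clipped, axis=0)
    # Logistic noise in place of the Gaussian noise of standard DP-SGD.
    noise = rng.logistic(loc=0.0, scale=noise_scale, size=g_bar.shape)
    # signSGD: the update direction is only the sign of the noisy gradient.
    return params - lr * np.sign(g_bar + noise)

# Usage: two toy per-sample gradients for a 3-dimensional parameter vector.
params = np.zeros(3)
grads = [np.array([0.5, -1.2, 0.3]), np.array([0.8, -0.9, -0.1])]
params = dp_signsgd_step(params, grads)
print(params)
```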
