Poster in Workshop: Theory and Practice of Differential Privacy

Improving Privacy-Preserving Deep Learning With Immediate Sensitivity

Timothy Stevens · David Darais · Ben U Gelman · David Slater · Joseph Near


Abstract:

There is growing evidence that complex neural networks memorize their training data, and that privacy attacks (e.g., membership inference) allow an adversary to recover that training data from the model. Differential privacy provides a defense against these attacks, but reduces model accuracy. We present a new defense against privacy attacks in deep learning, inspired by differential privacy, that scales the noise added to gradients using immediate sensitivity---a novel approximation of the local sensitivity of a gradient calculation. Our empirical evaluation suggests that our approach produces higher accuracy for a desired level of privacy than gradient-clipping-based differentially private training.
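The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch-style sketch of the general idea: add Gaussian noise to the gradients, scaled by an estimated sensitivity of the gradient computation rather than by a fixed clipping bound as in DP-SGD. The `estimate_sensitivity` callable, `sigma`, and the training-step structure are hypothetical placeholders for illustration; the paper's actual immediate-sensitivity formula is not given in this abstract.

```python
import torch


def noisy_gradient_step(model, loss_fn, batch, sigma, estimate_sensitivity):
    """One training step that adds Gaussian noise scaled by an estimated
    sensitivity of the gradient computation.

    `estimate_sensitivity` is a hypothetical callable standing in for the
    paper's immediate-sensitivity approximation of local sensitivity; its
    exact form is not specified in the abstract.
    """
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Placeholder: an approximation of the local sensitivity of the
    # gradient for this batch (the paper's "immediate sensitivity").
    sensitivity = estimate_sensitivity(model, inputs, loss)

    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                # Scale the noise by the estimated sensitivity rather than
                # by a fixed per-example clipping bound.
                p.grad.add_(torch.randn_like(p.grad) * sigma * sensitivity)
```

In this sketch the noise magnitude adapts to the estimated sensitivity of each gradient computation, which is the contrast the abstract draws with gradient-clipping-based differentially private training, where the noise scale is tied to a fixed clipping norm.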
