Poster in Workshop: Theory and Practice of Differential Privacy

Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty

Moritz Knolle · Alexander Ziller · Dmitrii Usynin · Rickmer Braren · Marcus Makowski · Daniel Rueckert · Georgios Kaissis


Abstract:

We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This represents a serious issue for safety-critical applications, e.g. in medical diagnosis. We highlight and exploit parallels between stochastic gradient Langevin dynamics, a scalable Bayesian inference technique for training deep neural networks, and DP-SGD, in order to train differentially private, Bayesian neural networks with minor adjustments to the original (DP-SGD) algorithm. Our approach provides considerably more reliable uncertainty estimates than DP-SGD, as demonstrated empirically by a reduction in expected calibration error (MNIST ∼5-fold, Pediatric Pneumonia Dataset ∼2-fold).
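The abstract does not include an implementation; the following is a minimal sketch of the DP-SGD/SGLD parallel it describes, not the authors' method. It uses an illustrative logistic-regression model with assumed hyperparameters (`clip_norm`, `noise_multiplier`, learning rate, batch size): the Gaussian noise added for privacy is read as the injected noise of a Langevin-style update, so late iterates can be treated as approximate posterior samples and averaged for calibrated predictive probabilities.

```python
import numpy as np

# Minimal sketch (assumptions, not the authors' code): DP-SGD on logistic
# regression, where the privacy noise doubles as Langevin-style noise and
# tail iterates are kept as approximate posterior samples.

rng = np.random.default_rng(0)

# Synthetic binary classification data (stand-in for MNIST / pneumonia data)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed DP-SGD hyperparameters
clip_norm = 1.0         # per-example gradient clipping bound C
noise_multiplier = 1.1  # sigma for the Gaussian mechanism
lr = 0.05
batch_size = 50
steps = 500

w = np.zeros(d)
posterior_samples = []  # late iterates kept as approximate posterior samples

for t in range(steps):
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]

    # Per-example gradients of the logistic loss: (sigmoid(x.w) - y) * x
    preds = sigmoid(Xb @ w)
    per_example_grads = (preds - yb)[:, None] * Xb  # shape (batch, d)

    # Clip each example's gradient to norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum, add calibrated Gaussian noise, then average over the batch
    noisy_grad = (per_example_grads.sum(axis=0)
                  + noise_multiplier * clip_norm * rng.normal(size=d)) / batch_size

    # Noisy gradient step; the injected Gaussian noise makes this resemble SGLD
    w -= lr * noisy_grad

    if t >= steps - 100:  # discard burn-in, retain tail iterates
        posterior_samples.append(w.copy())

# Predictive uncertainty: average predictions over the retained samples
samples = np.array(posterior_samples)
probs = sigmoid(X @ samples.T).mean(axis=1)
print("mean predictive probability on first 5 points:", probs[:5])
```

Averaging predictions over the retained iterates, rather than using the final weights alone, is what yields the calibrated predictive uncertainty the abstract refers to; the privacy analysis itself is unchanged from standard DP-SGD.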
