We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This represents a serious issue for safety-critical applications, e.g. in medical diagnosis. We highlight and exploit parallels between stochastic gradient Langevin dynamics (SGLD), a scalable Bayesian inference technique for training deep neural networks, and DP-SGD, in order to train differentially private Bayesian neural networks with minor adjustments to the original DP-SGD algorithm. Our approach provides considerably more reliable uncertainty estimates than DP-SGD, as demonstrated empirically by a reduction in expected calibration error (MNIST: ∼5-fold, Pediatric Pneumonia Dataset: ∼2-fold).
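The parallel the abstract exploits is that both algorithms perturb gradient updates with Gaussian noise: DP-SGD to bound each example's influence on the model, SGLD to draw approximate samples from the parameter posterior. The snippet below is a minimal sketch of that parallel, not the authors' implementation; the clipping norm C, noise multiplier sigma, and step size eta are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): one DP-SGD step vs. one SGLD step.
# C, sigma, and eta are illustrative values, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(theta, per_example_grads, C=1.0, sigma=1.0, eta=0.1):
    """DP-SGD: clip each per-example gradient to L2 norm C, sum, add
    Gaussian noise calibrated to the sensitivity C, average, then step."""
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * C, theta.shape)
    return theta - eta * noisy_sum / len(per_example_grads)

def sgld_step(theta, grad, eta=0.1):
    """SGLD: half-step along the gradient plus Gaussian noise whose variance
    equals the step size, so iterates approximately sample the posterior."""
    return theta - 0.5 * eta * grad + rng.normal(0.0, np.sqrt(eta), theta.shape)

# Toy demo on the quadratic loss L(theta) = 0.5 * ||theta||^2 (gradient = theta).
theta = np.ones(3)
per_example_grads = [theta + rng.normal(0.0, 0.1, theta.shape) for _ in range(8)]
print(dp_sgd_step(theta, per_example_grads))  # private, noisy update
print(sgld_step(theta, theta))                # Bayesian, noisy update
```

Note how close the two updates are in form: both are a gradient step plus additive Gaussian noise, which is what allows a DP-SGD-trained network to be reinterpreted as an approximate Bayesian one with only minor adjustments.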
Author Information
Moritz Knolle (Technical University of Munich)
Alexander Ziller (Technical University of Munich)
Dmitrii Usynin (Imperial College London / Technical University of Munich)
Rickmer Braren
Marcus Makowski
Daniel Rueckert (Imperial College London)
Georgios Kaissis (Technical University of Munich)
More from the Same Authors
- 2021: Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation
  Alexander Ziller · Dmitrii Usynin · Moritz Knolle · Kritika Prakash · Andrew Trask · Marcus Makowski · Rickmer Braren · Daniel Rueckert · Georgios Kaissis
- 2018 Poster: Semi-Supervised Learning via Compact Latent Space Clustering
  Konstantinos Kamnitsas · Daniel C. Castro · Loic Le Folgoc · Ian Walker · Ryutaro Tanno · Daniel Rueckert · Ben Glocker · Antonio Criminisi · Aditya Nori
- 2018 Oral: Semi-Supervised Learning via Compact Latent Space Clustering
  Konstantinos Kamnitsas · Daniel C. Castro · Loic Le Folgoc · Ian Walker · Ryutaro Tanno · Daniel Rueckert · Ben Glocker · Antonio Criminisi · Aditya Nori