Poster
TAN Without a Burn: Scaling Laws of DP-SGD
Tom Sander · Pierre Stock · Alexandre Sablayrolles
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps. These techniques require much more computing resources than their non-private counterparts, shifting the traditional privacy-accuracy trade-off to a privacy-accuracy-compute trade-off and making hyper-parameter search virtually impossible for realistic scenarios. In this work, we decouple the privacy analysis from the experimental behavior of noisy training to explore the trade-off with minimal computational requirements. We first use the tools of Rényi Differential Privacy (RDP) to highlight that the privacy budget, when not overcharged, depends only on the total amount of noise (TAN) injected throughout training. We then derive scaling laws for training models with DP-SGD to optimize hyper-parameters with more than a $100\times$ reduction in computational budget. We apply the proposed method to CIFAR-10 and ImageNet and, in particular, strongly improve the state of the art on ImageNet with a $+9$ points gain in top-1 accuracy for a privacy budget $\varepsilon=8$.
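The TAN idea lends itself to a short arithmetic sketch. The snippet below is not the authors' code: it assumes the definition $\eta = \sigma / (q\sqrt{S})$ with sampling ratio $q = B/N$, noise multiplier $\sigma$, and $S$ training steps, and all numeric values are illustrative placeholders. It checks that dividing the batch size and the noise multiplier by the same factor $k$ leaves TAN, and therefore (in the non-overcharged regime) the privacy budget, unchanged.

```python
import math

def tan(sigma: float, batch_size: int, dataset_size: int, steps: int) -> float:
    """Total Amount of Noise eta = sigma / (q * sqrt(S)),
    with sampling ratio q = B / N (notation assumed from the paper)."""
    q = batch_size / dataset_size
    return sigma / (q * math.sqrt(steps))

# Reference large-batch configuration (all numbers are illustrative).
N, S = 1_281_167, 18_000          # ImageNet size, number of training steps
sigma_ref, B_ref = 2.5, 16_384    # noise multiplier, batch size

# Simulated configuration for cheap hyper-parameter search: divide both
# B and sigma by k, keeping S fixed, so sigma/B -- and hence TAN -- is
# unchanged while each step costs roughly k times less compute.
k = 128
sigma_sim, B_sim = sigma_ref / k, B_ref // k

assert math.isclose(tan(sigma_ref, B_ref, N, S),
                    tan(sigma_sim, B_sim, N, S))
```

Hyper-parameters tuned in the cheap small-batch regime are then transferred to the large-batch run, whose privacy budget is accounted at the original $\sigma$; per the abstract, $\varepsilon$ is approximately a function of TAN alone as long as the budget is not overcharged.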
Author Information
Tom Sander (Facebook)
Pierre Stock (Facebook)
Alexandre Sablayrolles (Facebook AI)
More from the Same Authors
- 2023: Green Federated Learning
  Ashkan Yousefpour · Shen Guo · Ashish Shenoy · Sayan Ghosh · Pierre Stock · Kiwan Maeng · Schalk-Willem Krüger · Michael Rabbat · Carole-Jean Wu · Ilya Mironov
- 2023 Poster: Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design
  Chuan Guo · Kamalika Chaudhuri · Pierre Stock · Michael Rabbat
- 2023 Poster: Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano
  Chuan Guo · Alexandre Sablayrolles · Maziar Sanjabi