Spotlight
Practical and Private (Deep) Learning Without Sampling or Shuffling
Peter Kairouz · Brendan McMahan · Shuang Song · Om Dipakbhai Thakkar · Abhradeep Guha Thakurta · Zheng Xu

Thu Jul 22 06:30 PM -- 06:35 PM (PDT)

We consider training models with differential privacy (DP) using mini-batch gradients. The existing state of the art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification.
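The reason DP-FTRL can skip amplification is that it adds *correlated* noise to the running sum of gradients via the classic tree-aggregation scheme for private prefix sums: each prefix sum is released using only O(log t) noise terms, rather than fresh independent noise per step. Below is a minimal numpy sketch of that idea; the class and function names are illustrative (not from the authors' code), and per-example clipping and the actual FTRL model update are omitted.

```python
import numpy as np


def dyadic_intervals(t):
    """Decompose [1, t] into disjoint dyadic intervals (start, size).

    The number of intervals equals the number of 1-bits in t, so each
    prefix sum is covered by at most ceil(log2(t + 1)) tree nodes.
    """
    intervals = []
    start = 1
    for bit in reversed(range(t.bit_length())):
        size = 1 << bit
        if t & size:
            intervals.append((start, size))
            start += size
    return intervals


class TreeAggregator:
    """Releases noisy gradient prefix sums via tree aggregation (sketch)."""

    def __init__(self, dim, sigma, seed=0):
        self.rng = np.random.default_rng(seed)
        self.dim = dim
        self.sigma = sigma        # noise scale per tree node
        self.noise = {}           # dyadic interval -> cached noise vector
        self.prefix = np.zeros(dim)
        self.t = 0

    def step(self, grad):
        """Accumulate one (clipped) gradient; return a noisy prefix sum."""
        self.t += 1
        self.prefix += grad
        total_noise = np.zeros(self.dim)
        for iv in dyadic_intervals(self.t):
            # Noise for each dyadic interval is sampled once and reused,
            # which is what correlates the releases across steps.
            if iv not in self.noise:
                self.noise[iv] = self.rng.normal(0.0, self.sigma, self.dim)
            total_noise += self.noise[iv]
        return self.prefix + total_noise
```

In the actual algorithm, per-example gradients are clipped before summation and sigma is calibrated to the clip norm and the tree depth; the point of the sketch is only that each release sums a logarithmic number of reused noise nodes, which is why no sampling- or shuffling-based amplification is needed.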

Author Information

Peter Kairouz (Google)
Brendan McMahan (Google)
Shuang Song (Google)
Om Thakkar (Google)
Abhradeep Guha Thakurta (Google)
Zheng Xu (Google Research)
