Despite intense interest and considerable effort, the current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy-preserving training regimes. Wide neural networks, such as those deployed for NLP or recommender systems, are a particularly challenging class.
Observing that these models share a common component, an embedding layer that reduces the dimensionality of the input, we focus on developing a general approach to training these models that takes advantage of the sparsity of their gradients. We propose a novel algorithm for privately training neural networks. Furthermore, we provide an empirical study of a DP wide neural network on a real-world dataset, a setting that has rarely been explored in prior work.
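To make the setting concrete, below is a minimal sketch of a standard DP-SGD step (per-example clipping plus Gaussian noise) applied to an embedding layer. It is not the paper's algorithm; the function names, shapes, and hyperparameters are illustrative assumptions. It illustrates where embedding-gradient sparsity helps (per-example clipping touches only the rows an example uses) and where vanilla DP-SGD stays dense (the noise covers the whole table), which is the cost a sparsity-aware method would aim to avoid.

```python
# Minimal DP-SGD sketch for one embedding table, assuming per-example
# clipping and Gaussian noise (Abadi et al. style). Names are illustrative.
import numpy as np

def dp_sgd_embedding_step(emb, batch_rows, batch_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update on an embedding table `emb` of shape (vocab, dim).

    batch_rows[i]  : row indices that example i touched (its sparse support)
    batch_grads[i] : per-example gradient restricted to those rows
    """
    grad_sum = np.zeros_like(emb)

    for rows, g in zip(batch_rows, batch_grads):
        # Per-example clipping only needs the non-zero rows, so its cost
        # scales with the support size, not the vocabulary size.
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        grad_sum[rows] += scale * g

    # Vanilla Gaussian mechanism: noise is added to every coordinate of the
    # wide embedding table. This dense step is the bottleneck that
    # sparsity-aware private training seeks to sidestep.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=emb.shape)
    noisy_grad = (grad_sum + noise) / len(batch_grads)
    return emb - lr * noisy_grad

# Tiny usage example with made-up shapes.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 16))          # 1000-row vocab, 16-dim embeddings
rows = [np.array([3, 7]), np.array([42])]  # each example touches a few rows
grads = [rng.normal(size=(2, 16)), rng.normal(size=(1, 16))]
emb = dp_sgd_embedding_step(emb, rows, grads, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=rng)
```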
Author Information
Huanyu Zhang (Facebook)
Ilya Mironov (Facebook AI)
Meisam Hejazinia (University of Texas, Dallas)
More from the Same Authors
- 2021: Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
  Gautam Kamath · Xingtu Liu · Huanyu Zhang
- 2023 Poster: Federated Linear Contextual Bandits with User-level Differential Privacy
  Ruiquan Huang · Huanyu Zhang · Meisam Hejazinia · Luca Melis · Milan Shen · Jing Yang
- 2022 Poster: Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
  Gautam Kamath · Xingtu Liu · Huanyu Zhang
- 2022 Oral: Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
  Gautam Kamath · Xingtu Liu · Huanyu Zhang
- 2020: Keynote Session 4: The Shuffle Model and Federated Learning, by Ilya Mironov (Facebook)
  Ilya Mironov