We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm. We provide upper bounds on the regret of both algorithms and show that the bounds are (worst-case) optimal. As a consequence of our development, we show that our private versions of AdaGrad outperform adaptive SGD, which in turn outperforms traditional SGD in scenarios with non-isotropic gradients where (non-private) AdaGrad provably outperforms SGD. The major challenge is that the isotropic noise typically added for privacy dominates the signal in gradient geometry for high-dimensional problems; approaches that address this by effectively optimizing over lower-dimensional subspaces simply ignore the problems that varying gradient geometries introduce. In contrast, we study non-isotropic clipping and noise addition, developing a principled theoretical approach; the resulting procedures also enjoy significantly stronger empirical performance than prior approaches.
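To make the general recipe concrete, the sketch below shows a generic differentially private AdaGrad-style loop: per-example gradients are clipped, averaged, perturbed with Gaussian noise, and the noisy gradients drive AdaGrad's diagonal preconditioner. This is a minimal illustrative sketch, not the paper's algorithm; in particular, it uses standard isotropic clipping and noise, whereas the paper's contribution is precisely the non-isotropic analogue. All names and hyperparameters here (grad_oracle, clip_norm, noise_mult, and so on) are assumptions for illustration, and the noise scale is not calibrated to a specific (epsilon, delta) budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_clip(g, c):
    # Scale g into the L2 ball of radius c (the standard DP clipping step).
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def private_adagrad(grad_oracle, dim, steps=200, lr=0.5,
                    clip_norm=1.0, noise_mult=1.0, eps=1e-8):
    # Illustrative sketch, not the paper's method: AdaGrad run on
    # privatized gradients. Each per-example gradient is clipped, the
    # batch average is perturbed with isotropic Gaussian noise, and the
    # *noisy* gradient feeds both the accumulator and the update.
    w = np.zeros(dim)
    G = np.zeros(dim)  # running sum of squared (noisy) gradients
    for _ in range(steps):
        per_example = grad_oracle(w)  # list of per-example gradients
        avg = np.mean([l2_clip(g, clip_norm) for g in per_example], axis=0)
        sigma = noise_mult * clip_norm / len(per_example)
        noisy = avg + rng.normal(0.0, sigma, size=dim)
        G += noisy ** 2
        w -= lr * noisy / (np.sqrt(G) + eps)  # per-coordinate stepsize
    return w

# Toy usage: private least squares on synthetic data.
dim, batch = 10, 64
w_star = rng.normal(size=dim)

def grad_oracle(w):
    X = rng.normal(size=(batch, dim))
    r = X @ (w - w_star)                      # residuals
    return [ri * xi for ri, xi in zip(r, X)]  # per-example gradients

w_hat = private_adagrad(grad_oracle, dim)
print("estimation error:", np.linalg.norm(w_hat - w_star))
```

Note that the accumulator G is built from the noisy gradients rather than the true ones: releasing or reusing unperturbed gradients would leak information, so any adaptivity must itself be computed from privatized quantities.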
Author Information
Hilal Asi (Stanford University)
John Duchi (Stanford University)
Alireza Fallah (MIT)
Omid Javidbakht (Apple)
Kunal Talwar (Apple)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Private Adaptive Gradient Methods for Convex Optimization »
  Fri. Jul 23rd 04:00 -- 06:00 AM, Room: Virtual
More from the Same Authors
- 2021 : Lossless Compression of Efficient Private Local Randomizers »
  Vitaly Feldman · Kunal Talwar
- 2021 : Adapting to function difficulty and growth conditions in private optimization »
  Hilal Asi · Daniel A Levy · John Duchi
- 2021 : Differential Secrecy for Distributed Data and Applications to Robust Differentially Secure Vector Summation »
  Kunal Talwar
- 2021 : Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling »
  Vitaly Feldman · Audra McMillan · Kunal Talwar
- 2021 : Mean Estimation with User-level Privacy under Data Heterogeneity »
  Rachel Cummings · Vitaly Feldman · Audra McMillan · Kunal Talwar
- 2021 : When Is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning? »
  Gavin Brown · Mark Bun · Vitaly Feldman · Adam Smith · Kunal Talwar
- 2021 : A Practitioners Guide to Differentially Private Convex Optimization »
  Ryan McKenna · Hristo Paskov · Kunal Talwar
- 2023 Poster: Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime »
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2022 : Low-Communication Algorithms for Private Federated Data Analysis »
  Kunal Talwar
- 2022 Poster: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
  Karan Chadha · Gary Cheng · John Duchi
- 2022 Poster: Optimal Algorithms for Mean Estimation under Local Differential Privacy »
  Hilal Asi · Vitaly Feldman · Kunal Talwar
- 2022 Oral: Optimal Algorithms for Mean Estimation under Local Differential Privacy »
  Hilal Asi · Vitaly Feldman · Kunal Talwar
- 2022 Spotlight: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
  Karan Chadha · Gary Cheng · John Duchi
- 2022 Poster: Private optimization in the interpolation regime: faster rates and hardness results »
  Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
- 2022 Spotlight: Private optimization in the interpolation regime: faster rates and hardness results »
  Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
- 2021 Poster: Lossless Compression of Efficient Private Local Randomizers »
  Vitaly Feldman · Kunal Talwar
- 2021 Poster: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry »
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2021 Oral: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry »
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2021 Spotlight: Lossless Compression of Efficient Private Local Randomizers »
  Vitaly Feldman · Kunal Talwar
- 2021 Poster: A Wasserstein Minimax Framework for Mixed Linear Regression »
  Theo Diamandis · Yonina Eldar · Alireza Fallah · Farzan Farnia · Asuman Ozdaglar
- 2021 Poster: Characterizing Structural Regularities of Labeled Data in Overparameterized Models »
  Ziheng Jiang · Chiyuan Zhang · Kunal Talwar · Michael Mozer
- 2021 Oral: A Wasserstein Minimax Framework for Mixed Linear Regression »
  Theo Diamandis · Yonina Eldar · Alireza Fallah · Farzan Farnia · Asuman Ozdaglar
- 2021 Oral: Characterizing Structural Regularities of Labeled Data in Overparameterized Models »
  Ziheng Jiang · Chiyuan Zhang · Kunal Talwar · Michael Mozer
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy »
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang
- 2020 Poster: FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis »
  Aman Sinha · Matthew O'Kelly · Hongrui Zheng · Rahul Mangharam · John Duchi · Russ Tedrake
- 2017 Poster: Adaptive Sampling Probabilities for Non-Smooth Optimization »
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi
- 2017 Poster: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: Adaptive Sampling Probabilities for Non-Smooth Optimization »
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi