Spotlight
Private optimization in the interpolation regime: faster rates and hardness results
Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
Wed Jul 20 02:45 PM -- 02:50 PM (PDT) @ Room 307
In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems---namely, problems where there exists a solution that simultaneously minimizes all of the sample losses---than on non-interpolating ones; similar improvements are not known in the private setting. In this paper, we investigate differentially private stochastic optimization in the interpolation regime. First, we show that without additional assumptions, interpolation problems do not exhibit improved convergence rates under differential privacy. However, when the functions exhibit quadratic growth around the optimum, we show (near) exponential improvements in the private sample complexity. In particular, we propose an adaptive algorithm that improves the sample complexity to achieve expected error $\alpha$ from $\frac{d}{\varepsilon \sqrt{\alpha}}$ to $\frac{1}{\alpha^\rho} + \frac{d}{\varepsilon} \log\left(\frac{1}{\alpha}\right)$ for any fixed $\rho > 0$, while retaining the standard minimax-optimal sample complexity for non-interpolation problems. We prove a lower bound showing that the dimension-dependent term in the expression above is tight. Furthermore, we provide a superefficiency result demonstrating the necessity of the polynomial term for adaptive algorithms: any algorithm with polylogarithmic sample complexity on interpolation problems cannot achieve the minimax-optimal rates on the family of non-interpolation problems.
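The interpolation condition in the abstract can be made concrete with a standard example (not from the paper itself): in overparameterized least squares, where the dimension exceeds the sample size, a single parameter vector typically drives every per-sample loss to zero simultaneously. A minimal sketch, with all names and dimensions chosen purely for illustration:

```python
import numpy as np

# Illustration of the interpolation regime: overparameterized least squares
# with d parameters > n samples, so an interpolating solution exists
# (for generic Gaussian data) that minimizes all sample losses at once.
rng = np.random.default_rng(0)
n, d = 20, 50  # n samples, d parameters; d > n makes interpolation possible
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum-norm interpolating solution w* = X^+ y via the pseudoinverse.
w_star = np.linalg.pinv(X) @ y

# Each per-sample loss f_i(w) = (x_i^T w - y_i)^2 is numerically zero at
# w_star, i.e., one point simultaneously minimizes every sample loss.
per_sample_losses = (X @ w_star - y) ** 2
print(per_sample_losses.max())  # ~0 up to floating-point error
```

A non-interpolating problem, by contrast, would have residual losses at the optimum (e.g., the same setup with n > d), which is the regime where the slower non-private and private rates apply.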
Author Information
Hilal Asi (Stanford University)
Karan Chadha (Stanford University)
Gary Cheng (Stanford University)
John Duchi (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Private optimization in the interpolation regime: faster rates and hardness results »
  Wed. Jul 20th through Thu the 21st, Room Hall E #1011
More from the Same Authors
- 2021: Adapting to function difficulty and growth conditions in private optimization »
  Hilal Asi · Daniel A Levy · John Duchi
- 2023: Differentially Private Heavy Hitters using Federated Analytics »
  Karan Chadha · Junye Chen · John Duchi · Vitaly Feldman · Hanieh Hashemi · Omid Javidbakht · Audra McMillan · Kunal Talwar
- 2022 Poster: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
  Karan Chadha · Gary Cheng · John Duchi
- 2022 Poster: Optimal Algorithms for Mean Estimation under Local Differential Privacy »
  Hilal Asi · Vitaly Feldman · Kunal Talwar
- 2022 Oral: Optimal Algorithms for Mean Estimation under Local Differential Privacy »
  Hilal Asi · Vitaly Feldman · Kunal Talwar
- 2022 Spotlight: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
  Karan Chadha · Gary Cheng · John Duchi
- 2021 Poster: Private Adaptive Gradient Methods for Convex Optimization »
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Poster: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry »
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2021 Spotlight: Private Adaptive Gradient Methods for Convex Optimization »
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Oral: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry »
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy »
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang
- 2020 Poster: FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis »
  Aman Sinha · Matthew O'Kelly · Hongrui Zheng · Rahul Mangharam · John Duchi · Russ Tedrake
- 2017 Poster: Adaptive Sampling Probabilities for Non-Smooth Optimization »
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi
- 2017 Poster: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: Adaptive Sampling Probabilities for Non-Smooth Optimization »
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi