Spotlight
Private optimization in the interpolation regime: faster rates and hardness results
Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
Wed Jul 20 02:45 PM – 02:50 PM (PDT) @ Room 307
In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems, namely, problems where there exists a solution that simultaneously minimizes all of the sample losses, than on non-interpolating ones; similar improvements are not known in the private setting. In this paper, we investigate differentially private stochastic optimization in the interpolation regime. First, we show that without additional assumptions, interpolation problems do not exhibit improved convergence rates with differential privacy. However, when the functions exhibit quadratic growth around the optimum, we show (near) exponential improvements in the private sample complexity. In particular, we propose an adaptive algorithm that improves the sample complexity to achieve expected error $\alpha$ from $\frac{d}{\varepsilon \sqrt{\alpha}}$ to $\frac{1}{\alpha^\rho} + \frac{d}{\varepsilon} \log\left(\frac{1}{\alpha}\right)$ for any fixed $\rho > 0$, while retaining the standard minimax-optimal sample complexity for non-interpolation problems. We prove a lower bound showing that the dimension-dependent term in the expression above is tight. Furthermore, we provide a super-efficiency result demonstrating the necessity of the polynomial term for adaptive algorithms: any algorithm with polylogarithmic sample complexity on interpolation problems cannot achieve the minimax-optimal rates on the family of non-interpolation problems.
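To get a feel for the gap between the two rates in the abstract, the following sketch evaluates both sample-complexity expressions numerically. The values of $d$, $\varepsilon$, $\rho$, and $\alpha$ are illustrative choices, not from the paper, and constants and lower-order terms are ignored:

```python
import math

def n_standard(d, eps, alpha):
    # Minimax sample complexity without interpolation: d / (eps * sqrt(alpha)).
    return d / (eps * math.sqrt(alpha))

def n_interpolation(d, eps, alpha, rho):
    # Adaptive rate under quadratic growth: 1/alpha^rho + (d/eps) * log(1/alpha).
    return 1.0 / alpha**rho + (d / eps) * math.log(1.0 / alpha)

# Illustrative values (assumed, not from the paper): d = 100, eps = 1, rho = 0.1.
d, eps, rho = 100, 1.0, 0.1
for alpha in (1e-2, 1e-4, 1e-6):
    print(alpha, n_standard(d, eps, alpha), n_interpolation(d, eps, alpha, rho))
```

The dimension-dependent term scales only logarithmically in $1/\alpha$ under interpolation, whereas the standard rate grows polynomially as $1/\sqrt{\alpha}$, so the gap widens rapidly as the target error shrinks.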
Author Information
Hilal Asi (Stanford University)
Karan Chadha (Stanford University)
Gary Cheng (Stanford University)
John Duchi (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)

2022 Poster: Private optimization in the interpolation regime: faster rates and hardness results »
Wed. Jul 20th through Thu the 21st, Room Hall E #1011
More from the Same Authors

2021: Adapting to function difficulty and growth conditions in private optimization »
Hilal Asi · Daniel A Levy · John Duchi 
2023: Differentially Private Heavy Hitters using Federated Analytics »
Karan Chadha · Junye Chen · John Duchi · Vitaly Feldman · Hanieh Hashemi · Omid Javidbakht · Audra McMillan · Kunal Talwar 
2022 Poster: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
Karan Chadha · Gary Cheng · John Duchi 
2022 Poster: Optimal Algorithms for Mean Estimation under Local Differential Privacy »
Hilal Asi · Vitaly Feldman · Kunal Talwar 
2022 Oral: Optimal Algorithms for Mean Estimation under Local Differential Privacy »
Hilal Asi · Vitaly Feldman · Kunal Talwar 
2022 Spotlight: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
Karan Chadha · Gary Cheng · John Duchi 
2021 Poster: Private Adaptive Gradient Methods for Convex Optimization »
Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar 
2021 Poster: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry »
Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar 
2021 Spotlight: Private Adaptive Gradient Methods for Convex Optimization »
Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar 
2021 Oral: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry »
Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar 
2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy »
Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang 
2020 Poster: FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis »
Aman Sinha · Matthew O'Kelly · Hongrui Zheng · Rahul Mangharam · John Duchi · Russ Tedrake 
2017 Poster: Adaptive Sampling Probabilities for Non-Smooth Optimization »
Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi 
2017 Poster: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford 
2017 Talk: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford 
2017 Talk: Adaptive Sampling Probabilities for Non-Smooth Optimization »
Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi