The accelerated proximal point method (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization). This reduction is conceptually elegant and yields strong convergence rate guarantees. However, these rates feature an extraneous logarithmic term arising from the need to compute each proximal point to high accuracy. In this work, we propose a novel Relaxed Error Criterion for Accelerated Proximal Point (RECAPP) that eliminates the need for high accuracy subproblem solutions. We apply RECAPP to two canonical problems: finite-sum and max-structured minimization. For finite-sum problems, we match the best known complexity, previously obtained by carefully designed problem-specific algorithms. For minimizing max_y f(x,y) where f is convex in x and strongly concave in y, we improve on the best known (Catalyst-based) bound by a logarithmic factor.
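For context, the proximal point computation referenced above is, in standard notation, the regularized subproblem (this is the textbook definition; the paper's exact parameterization of the regularization strength \(\lambda\) may differ):

\[
x^{+} \approx \operatorname*{arg\,min}_{x}\;\Big\{ f(x) + \frac{\lambda}{2}\,\lVert x - \bar{x}\rVert^{2} \Big\}
\]

APPA/Catalyst repeatedly solves such subproblems around carefully chosen centers \(\bar{x}\), and the requirement that each be solved to high accuracy is the source of the logarithmic overhead; RECAPP replaces this requirement with a relaxed error criterion.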
Author Information
Yair Carmon (Tel Aviv University)
Arun Jambulapati (Stanford University)
Yujia Jin (Stanford University)
Aaron Sidford (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: RECAPP: Crafting a More Efficient Catalyst for Convex Optimization »
  Wed. Jul 20 through Thu. Jul 21, Hall E #714
More from the Same Authors
- 2021: On the Sample Complexity of Average-reward MDPs »
  Yujia Jin
- 2023 Poster: DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule »
  Maor Ivgi · Oliver Hinder · Yair Carmon
- 2023 Poster: Quantum Speedups for Zero-Sum Games via Improved Dynamic Gibbs Sampling »
  Adam Bouland · Yosheb Getachew · Yujia Jin · Aaron Sidford · Kevin Tian
- 2023 Poster: Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond »
  Itai Kreisler · Mor Shpigel Nacson · Daniel Soudry · Yair Carmon
- 2022 Poster: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time »
  Mitchell Wortsman · Gabriel Ilharco · Samir Gadre · Becca Roelofs · Raphael Gontijo Lopes · Ari Morcos · Hongseok Namkoong · Ali Farhadi · Yair Carmon · Simon Kornblith · Ludwig Schmidt
- 2022 Spotlight: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time »
  Mitchell Wortsman · Gabriel Ilharco · Samir Gadre · Becca Roelofs · Raphael Gontijo Lopes · Ari Morcos · Hongseok Namkoong · Ali Farhadi · Yair Carmon · Simon Kornblith · Ludwig Schmidt
- 2021 Poster: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization »
  John Miller · Rohan Taori · Aditi Raghunathan · Shiori Sagawa · Pang Wei Koh · Vaishaal Shankar · Percy Liang · Yair Carmon · Ludwig Schmidt
- 2021 Poster: Towards Tight Bounds on the Sample Complexity of Average-reward MDPs »
  Yujia Jin · Aaron Sidford
- 2021 Spotlight: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization »
  John Miller · Rohan Taori · Aditi Raghunathan · Shiori Sagawa · Pang Wei Koh · Vaishaal Shankar · Percy Liang · Yair Carmon · Ludwig Schmidt
- 2021 Spotlight: Towards Tight Bounds on the Sample Complexity of Average-reward MDPs »
  Yujia Jin · Aaron Sidford
- 2020 Poster: Efficiently Solving MDPs with Stochastic Mirror Descent »
  Yujia Jin · Aaron Sidford
- 2017 Poster: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford