Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible. Unlike in projection-based methods, globally accelerated convergence rates are in general unattainable for CG. However, a very recent work on Locally accelerated CG (LaCG) has demonstrated that local acceleration for CG is possible for many settings of interest. The main downside of LaCG is that it requires knowledge of the smoothness and strong convexity parameters of the objective function. We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms, both in terms of iteration count and wall-clock time.
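As context for the abstract, the following is a minimal sketch of the non-accelerated conditional gradient (Frank-Wolfe) baseline that PF-LaCG improves upon: each iteration calls only a linear minimization oracle over the constraint set, never a projection, and the standard 2/(t+2) step size requires no smoothness or strong convexity parameters. The simplex feasible set and all function names below are illustrative assumptions, not the paper's PF-LaCG algorithm.

import numpy as np

def lmo_simplex(grad):
    # Linear minimization oracle over the probability simplex:
    # argmin over v in the simplex of <grad, v> is the vertex e_i
    # with i = argmin_i grad[i], so the oracle costs only O(n).
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def conditional_gradient(grad_f, x0, num_iters=200):
    # Vanilla Frank-Wolfe: projection-free, and parameter-free in the
    # sense that the 2/(t+2) step size needs no knowledge of the
    # smoothness or strong convexity constants of the objective.
    x = x0.copy()
    for t in range(num_iters):
        v = lmo_simplex(grad_f(x))          # one LMO call, no projection
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * v   # convex combination stays feasible
    return x

# Example: minimize ||x - b||^2 over the simplex (b lies outside it,
# so a projection-based method would need a nontrivial projection here).
b = np.array([0.1, 0.7, 0.2, 0.5])
x_star = conditional_gradient(lambda x: 2.0 * (x - b), np.ones(4) / 4)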
Author Information
Alejandro Carderera (Georgia Institute of Technology)
I am a third-year Ph.D. student in Machine Learning at the Georgia Institute of Technology, working with Prof. Sebastian Pokutta. My work is currently aimed at designing novel convex optimization algorithms with solid theoretical convergence guarantees and good numerical performance. Prior to joining the Ph.D. program, I worked at HP as an R&D Systems Engineer for two years. I obtained a Bachelor of Science in Industrial Engineering from the Universidad Politécnica de Madrid and a Master of Science in Applied Physics from Cornell University.
Jelena Diakonikolas (University of Wisconsin-Madison)
Cheuk Yin Lin (University of Wisconsin–Madison)
Sebastian Pokutta (ZIB/TUB)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Parameter-free Locally Accelerated Conditional Gradients
  Tue. Jul 20th 01:30 -- 01:35 PM
More from the Same Authors
- 2022 Poster: Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
  Jan Macdonald · Mathieu Besançon · Sebastian Pokutta
- 2022 Spotlight: Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
  Jan Macdonald · Mathieu Besançon · Sebastian Pokutta
- 2022 Poster: Sparser Kernel Herding with Pairwise Conditional Gradients without Swap Steps
  Kazuma Tsuji · Ken'ichiro Tanaka · Sebastian Pokutta
- 2022 Spotlight: Sparser Kernel Herding with Pairwise Conditional Gradients without Swap Steps
  Kazuma Tsuji · Ken'ichiro Tanaka · Sebastian Pokutta
- 2022 Poster: Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
  Stephan Wäldchen · Sebastian Pokutta · Felix Huber
- 2022 Oral: Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
  Stephan Wäldchen · Sebastian Pokutta · Felix Huber
- 2021 Poster: Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
  Chaobing Song · Stephen Wright · Jelena Diakonikolas
- 2021 Oral: Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
  Chaobing Song · Stephen Wright · Jelena Diakonikolas
- 2020 Poster: Boosting Frank-Wolfe by Chasing Gradients
  Cyrille W. Combettes · Sebastian Pokutta
- 2020 Poster: IPBoost – Non-Convex Boosting via Integer Programming
  Marc Pfetsch · Sebastian Pokutta
- 2020 Poster: On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness
  Sebastian Pokutta · Mohit Singh · Alfredo Torrico