Poster
Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
Zeyuan Allen-Zhu · Yuanzhi Li
Computing the top eigenvector online is a fundamental problem in machine learning. The famous matrix-multiplicative-weight-update (MMWU) framework solves this online problem with optimal regret. However, MMWU runs very slowly because it must compute matrix exponentials, so researchers proposed the follow-the-perturbed-leader (FTPL) framework, which is faster but whose regret is a factor of $\sqrt{d}$ worse than optimal for dimension-$d$ matrices.
We propose a \emph{follow-the-compressed-leader} framework that not only matches the optimal regret of MMWU (up to polylog factors) but also runs no slower than FTPL.
Our main idea is to ``compress'' the MMWU strategy to dimension 3 in the adversarial setting, or to dimension 1 in the stochastic setting. This resolves an open question of how to obtain algorithms for the online eigenvector problem that are both (nearly) optimal and efficient.
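For intuition, here is a minimal NumPy sketch of the classical MMWU strategy for the online eigenvector problem, whose per-round matrix exponential is exactly the bottleneck this work targets. It is an illustration only, not the paper's method: the function name and setup are our own, gain matrices are assumed symmetric with spectral norm at most 1, and the learner plays the density matrix $W_t \propto \exp(\eta \sum_{s<t} A_s)$.

import numpy as np

def mmwu_online_eigenvector(gain_matrices, eta=0.1):
    """Illustrative MMWU: play W_t proportional to exp(eta * sum of past gains).

    Returns the total gain sum_t <W_t, A_t>; regret is measured against
    lambda_max(sum_t A_t), the best fixed unit vector in hindsight.
    """
    d = gain_matrices[0].shape[0]
    cumulative = np.zeros((d, d))
    total_gain = 0.0
    for A in gain_matrices:
        # The costly step: a d x d matrix exponential every round,
        # computed here via eigendecomposition (O(d^3) time).
        w, V = np.linalg.eigh(eta * cumulative)
        e = np.exp(w - w.max())            # subtract max for numerical stability
        W = (V * e) @ V.T
        W /= np.trace(W)                   # normalize to a density matrix
        total_gain += np.trace(W @ A)      # expected gain of sampling from W_t
        cumulative += A
    return total_gain

Per the abstract, the compressed-leader idea replaces this full $d$-dimensional exponential with a 3-dimensional one in the adversarial setting, or a 1-dimensional one in the stochastic setting.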
Author Information
Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS)
Yuanzhi Li (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU »
  Mon. Aug 7th 04:06 -- 04:24 AM, Room C4.1
More from the Same Authors
- 2019 Poster: A Convergence Theory for Deep Learning via Over-Parameterization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
- 2019 Oral: A Convergence Theory for Deep Learning via Over-Parameterization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
- 2018 Poster: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits »
  Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
- 2018 Oral: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits »
  Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
- 2018 Poster: An Alternative View: When Does SGD Escape Local Minima? »
  Bobby Kleinberg · Yuanzhi Li · Yang Yuan
- 2018 Poster: Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization »
  Zeyuan Allen-Zhu
- 2018 Poster: The Well-Tempered Lasso »
  Yuanzhi Li · Yoram Singer
- 2018 Oral: The Well-Tempered Lasso »
  Yuanzhi Li · Yoram Singer
- 2018 Oral: An Alternative View: When Does SGD Escape Local Minima? »
  Bobby Kleinberg · Yuanzhi Li · Yang Yuan
- 2018 Oral: Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization »
  Zeyuan Allen-Zhu
- 2017 Poster: Near-Optimal Design of Experiments via Regret Minimization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
- 2017 Talk: Near-Optimal Design of Experiments via Regret Minimization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
- 2017 Poster: Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Poster: Faster Principal Component Regression and Stable Matrix Chebyshev Approximation »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Poster: Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter »
  Zeyuan Allen-Zhu
- 2017 Talk: Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter »
  Zeyuan Allen-Zhu
- 2017 Talk: Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Talk: Faster Principal Component Regression and Stable Matrix Chebyshev Approximation »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Poster: Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations »
  Yuanzhi Li · Yingyu Liang
- 2017 Talk: Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations »
  Yuanzhi Li · Yingyu Liang
- 2017 Tutorial: Recent Advances in Stochastic Convex and Non-Convex Optimization »
  Zeyuan Allen-Zhu