Talk
Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition
Zeyuan Allen-Zhu · Yuanzhi Li
We study k-GenEV, the problem of finding the top k generalized eigenvectors, and k-CCA, the problem of finding the top k vectors in canonical-correlation analysis. We propose algorithms LazyEV and LazyCCA to solve the two problems with running times linearly dependent on the input size and on k.
Furthermore, our algorithms are \emph{doubly-accelerated}: their running times depend only on the square root of the matrix condition number and on the square root of the eigengap. This is the first such result for either k-GenEV or k-CCA. We also provide the first gap-free results, whose running times depend on $1/\sqrt{\varepsilon}$ rather than on the eigengap.
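For concreteness, the sketch below shows the two problems the abstract defines, solved with dense off-the-shelf routines (scipy.linalg.eigh for the generalized eigenproblem and an SVD of the whitened cross-covariance for CCA). This is not the paper's LazyEV/LazyCCA; the dimensions d, n, k, the random data, and the small ridge regularization are illustrative assumptions only.

```python
# Minimal reference sketch of k-GenEV and k-CCA on small dense matrices.
# NOT the paper's LazyEV/LazyCCA; dimensions and data are illustrative.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d, n, k = 50, 1000, 4

# --- k-GenEV: top-k generalized eigenvectors of (A, B), with B positive definite ---
M = rng.standard_normal((d, d))
A = (M + M.T) / 2                      # symmetric
B = M @ M.T / d + np.eye(d)            # positive definite
vals, vecs = eigh(A, B)                # solves A v = lambda B v, eigenvalues ascending
top_k_genev = vecs[:, -k:]             # columns = top-k generalized eigenvectors

# --- k-CCA: top-k canonical-correlation directions between X and Y ---
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal((d, d)) + 0.5 * rng.standard_normal((n, d))
Sxx = X.T @ X / n + 1e-3 * np.eye(d)   # ridge-regularized covariances (assumption)
Syy = Y.T @ Y / n + 1e-3 * np.eye(d)
Sxy = X.T @ Y / n

# CCA reduces to an SVD of the whitened cross-covariance matrix:
# with Wx Wx^T = Sxx^{-1}, we have Wx^T Sxx Wx = I (and likewise for Wy).
Wx = np.linalg.cholesky(np.linalg.inv(Sxx))
Wy = np.linalg.cholesky(np.linalg.inv(Syy))
U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
phi, psi = Wx @ U[:, :k], Wy @ Vt[:k].T   # top-k canonical directions
print("top canonical correlations:", s[:k])
```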
Author Information
Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS)
Yuanzhi Li (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
-
2017 Poster: Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition »
Tue. Aug 8th 08:30 AM -- 12:00 PM Room Gallery #68
More from the Same Authors
-
2019 Poster: A Convergence Theory for Deep Learning via Over-Parameterization »
Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
-
2019 Oral: A Convergence Theory for Deep Learning via Over-Parameterization »
Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
-
2018 Poster: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits »
Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
-
2018 Oral: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits »
Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
-
2018 Poster: An Alternative View: When Does SGD Escape Local Minima? »
Bobby Kleinberg · Yuanzhi Li · Yang Yuan
-
2018 Poster: Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization »
Zeyuan Allen-Zhu
-
2018 Poster: The Well-Tempered Lasso »
Yuanzhi Li · Yoram Singer
-
2018 Oral: The Well-Tempered Lasso »
Yuanzhi Li · Yoram Singer
-
2018 Oral: An Alternative View: When Does SGD Escape Local Minima? »
Bobby Kleinberg · Yuanzhi Li · Yang Yuan
-
2018 Oral: Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization »
Zeyuan Allen-Zhu
-
2017 Poster: Near-Optimal Design of Experiments via Regret Minimization »
Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
-
2017 Talk: Near-Optimal Design of Experiments via Regret Minimization »
Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
-
2017 Poster: Faster Principal Component Regression and Stable Matrix Chebyshev Approximation »
Zeyuan Allen-Zhu · Yuanzhi Li
-
2017 Poster: Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter »
Zeyuan Allen-Zhu
-
2017 Talk: Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter »
Zeyuan Allen-Zhu
-
2017 Talk: Faster Principal Component Regression and Stable Matrix Chebyshev Approximation »
Zeyuan Allen-Zhu · Yuanzhi Li
-
2017 Poster: Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU »
Zeyuan Allen-Zhu · Yuanzhi Li
-
2017 Poster: Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations »
Yuanzhi Li · Yingyu Liang
-
2017 Talk: Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations »
Yuanzhi Li · Yingyu Liang
-
2017 Talk: Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU »
Zeyuan Allen-Zhu · Yuanzhi Li
-
2017 Tutorial: Recent Advances in Stochastic Convex and Non-Convex Optimization »
Zeyuan Allen-Zhu