Poster
Faster Principal Component Regression and Stable Matrix Chebyshev Approximation
Zeyuan Allen-Zhu · Yuanzhi Li
We solve principal component regression (PCR), up to a multiplicative accuracy of $1+\gamma$, by reducing the problem to $\tilde{O}(\gamma^{-1})$ black-box calls of ridge regression. As a consequence, our algorithm does not require any explicit construction of the top principal components, and is suitable for large-scale PCR instances. In contrast, the previous best result requires $\tilde{O}(\gamma^{-2})$ such black-box calls.
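For orientation, the sketch below shows a naive PCR baseline that explicitly constructs the principal components and regresses on them, i.e., the explicit construction the reduction above avoids. It assumes the thresholded formulation of PCR (keep the eigendirections of $A^\top A$ whose eigenvalue is at least a threshold `lam`); the function name and the use of dense eigendecomposition are illustrative choices, not the paper's method.

```python
import numpy as np

def pcr_exact(A, b, lam):
    """Naive PCR baseline (illustrative only): explicitly form the top
    principal components and solve least squares in that subspace."""
    # Eigendecomposition of the Gram matrix A^T A (dense, O(d^3) work).
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)
    # Keep only directions whose eigenvalue clears the threshold lam.
    P = eigvecs[:, eigvals >= lam]          # d x k basis of the principal subspace
    # Regress within the principal subspace: x = P * argmin_z ||(A P) z - b||_2.
    z, *_ = np.linalg.lstsq(A @ P, b, rcond=None)
    return P @ z
```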
We obtain this result by developing a general stable recurrence formula for matrix Chebyshev polynomials, and a degree-optimal polynomial approximation to the matrix sign function. Our techniques may be of independent interest, especially for the design of iterative methods.
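As background, matrix Chebyshev polynomials can be applied to a vector using only matrix-vector products via the textbook three-term recurrence $T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x)$. The sketch below shows this standard (unstabilized) recurrence under the assumption that exact products with $M$ are available; the paper's contribution is a stable variant for the setting where each product is computed only approximately, e.g., through a black-box ridge-regression solver.

```python
import numpy as np

def chebyshev_matvec(M, v, n):
    """Evaluate T_n(M) v with the standard three-term Chebyshev recurrence,
    using only matrix-vector products with M. (Textbook recurrence, not the
    paper's stabilized formula.)"""
    t_prev, t_cur = v, M @ v                # T_0(M) v and T_1(M) v
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        # T_{k+1}(M) v = 2 M (T_k(M) v) - T_{k-1}(M) v
        t_prev, t_cur = t_cur, 2.0 * (M @ t_cur) - t_prev
    return t_cur
```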
Author Information
Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS)
Yuanzhi Li (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Faster Principal Component Regression and Stable Matrix Chebyshev Approximation »
  Tue. Aug 8th 12:48 -- 01:06 AM Room C4.4
More from the Same Authors
- 2019 Poster: A Convergence Theory for Deep Learning via Over-Parameterization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
- 2019 Oral: A Convergence Theory for Deep Learning via Over-Parameterization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
- 2018 Poster: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits »
  Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
- 2018 Oral: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits »
  Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
- 2018 Poster: An Alternative View: When Does SGD Escape Local Minima? »
  Bobby Kleinberg · Yuanzhi Li · Yang Yuan
- 2018 Poster: Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization »
  Zeyuan Allen-Zhu
- 2018 Poster: The Well-Tempered Lasso »
  Yuanzhi Li · Yoram Singer
- 2018 Oral: The Well-Tempered Lasso »
  Yuanzhi Li · Yoram Singer
- 2018 Oral: An Alternative View: When Does SGD Escape Local Minima? »
  Bobby Kleinberg · Yuanzhi Li · Yang Yuan
- 2018 Oral: Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization »
  Zeyuan Allen-Zhu
- 2017 Poster: Near-Optimal Design of Experiments via Regret Minimization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
- 2017 Talk: Near-Optimal Design of Experiments via Regret Minimization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
- 2017 Poster: Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Poster: Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter »
  Zeyuan Allen-Zhu
- 2017 Talk: Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter »
  Zeyuan Allen-Zhu
- 2017 Talk: Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Poster: Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Poster: Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations »
  Yuanzhi Li · Yingyu Liang
- 2017 Talk: Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations »
  Yuanzhi Li · Yingyu Liang
- 2017 Talk: Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2017 Tutorial: Recent Advances in Stochastic Convex and Non-Convex Optimization »
  Zeyuan Allen-Zhu