Poster
Stochastic PCA with $\ell_2$ and $\ell_1$ Regularization
Poorya Mianjy · Raman Arora
We revisit convex relaxation based methods for stochastic optimization of principal component analysis (PCA). While methods that directly solve the nonconvex problem have been shown to be optimal in terms of statistical and computational efficiency, methods based on convex relaxation have been shown to enjoy comparable, or even superior, empirical performance; this motivates the need for a deeper formal understanding of the latter. Therefore, in this paper, we study variants of stochastic gradient descent for a convex relaxation of PCA with (a) $\ell_2$, (b) $\ell_1$, and (c) elastic net ($\ell_1 + \ell_2$) regularization, in the hope that these variants yield (a) better iteration complexity, (b) better control on the rank of the intermediate iterates, and (c) both, respectively. We show, theoretically and empirically, that compared to previous work on convex relaxation based methods, the proposed variants yield faster convergence and improve overall runtime to achieve a given user-specified $\epsilon$-suboptimality on the PCA objective. Furthermore, the proposed methods are shown to converge both in terms of the PCA objective and the distance between subspaces. However, there remains a gap in computational requirements for the proposed methods when compared with existing nonconvex approaches.
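For concreteness, the sketch below illustrates the general template such variants follow: stochastic proximal gradient steps on a matrix-valued convex relaxation of $k$-PCA over $\{M : 0 \preceq M \preceq I,\ \mathrm{tr}(M) \le k\}$, where the $\ell_1$ prox becomes soft-thresholding of the eigenvalues (this is what keeps intermediate iterates low rank) and the $\ell_2$ term becomes a shrinkage of the iterate toward zero. All function names, hyperparameters, the step-size schedule, and the capped-simplex projection routine here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np


def project_trace(w, k):
    """Project eigenvalues onto {w : 0 <= w_i <= 1, sum(w) <= k} by
    bisecting on a uniform shift theta (a standard spectral projection)."""
    w = np.clip(w, 0.0, 1.0)
    if w.sum() <= k:
        return w
    lo, hi = 0.0, w.max()
    for _ in range(60):  # sum(clip(w - theta, 0, 1)) decreases in theta
        theta = 0.5 * (lo + hi)
        if np.clip(w - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    return np.clip(w - 0.5 * (lo + hi), 0.0, 1.0)


def stochastic_pca_elastic_net(stream, d, k, eta=1.0, lam1=1e-3, lam2=1e-3):
    """One pass of stochastic proximal gradient over a data stream (a sketch,
    not the paper's algorithm).  The iterate M is a d x d PSD matrix in the
    convex relaxation of the set of rank-k projection matrices; the l2 term
    shrinks M toward 0, and the l1 prox soft-thresholds the eigenvalues."""
    M = np.zeros((d, d))
    for t, x in enumerate(stream, start=1):
        step = eta / np.sqrt(t)
        # stochastic gradient ascent on <M, x x^T> with l2 shrinkage
        M = (1.0 - step * lam2) * M + step * np.outer(x, x)
        # l1 prox and feasibility projection, both spectral operations
        w, V = np.linalg.eigh(M)
        w = np.maximum(w - step * lam1, 0.0)  # eigenvalue soft-thresholding
        w = project_trace(w, k)               # enforce 0 <= M <= I, tr(M) <= k
        M = (V * w) @ V.T                     # rebuild M = V diag(w) V^T
    return M


# toy usage: recover the top-k subspace of a spiked covariance model
rng = np.random.default_rng(0)
d, k, n = 20, 3, 2000
U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # planted subspace
samples = (rng.standard_normal((n, k)) * 3.0) @ U.T \
          + 0.1 * rng.standard_normal((n, d))
M_hat = stochastic_pca_elastic_net(samples, d, k)
print("rank of final iterate:", np.linalg.matrix_rank(M_hat, tol=1e-3))
print("captured variance ratio:", np.trace(U.T @ M_hat @ U) / k)
```

Performing the prox and the projection in the eigenbasis is the natural design choice in this sketch because the regularizers and the constraint set are all spectral functions of $M$; the per-iteration eigendecomposition is also where the computational gap relative to nonconvex methods comes from.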
Author Information
Poorya Mianjy (Johns Hopkins University)
Raman Arora (Johns Hopkins University)

Raman Arora received his M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Wisconsin-Madison in 2005 and 2009, respectively. From 2009 to 2011, he was a Postdoctoral Research Associate at the University of Washington in Seattle and a Visiting Researcher at Microsoft Research, Redmond. Since 2011, he has been with the Toyota Technological Institute at Chicago (TTIC). His research interests include machine learning, speech recognition, and statistical signal processing.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Stochastic PCA with $\ell_2$ and $\ell_1$ Regularization
  Wed. Jul 11th, 12:10--12:20 PM, Room K11
More from the Same Authors
- 2023 Poster: Faster Rates of Convergence to Stationary Points in Differentially Private Optimization
  Raman Arora · Raef Bassily · Tomás González · Cristobal Guzman · Michael Menart · Enayat Ullah
- 2023 Poster: From Adaptive Query Release to Machine Unlearning
  Enayat Ullah · Raman Arora
- 2021 Poster: Robust Learning for Data Poisoning Attacks
  Yunjuan Wang · Poorya Mianjy · Raman Arora
- 2021 Spotlight: Robust Learning for Data Poisoning Attacks
  Yunjuan Wang · Poorya Mianjy · Raman Arora
- 2021 Poster: Dropout: Explicit Forms and Capacity Control
  Raman Arora · Peter Bartlett · Poorya Mianjy · Nati Srebro
- 2021 Spotlight: Dropout: Explicit Forms and Capacity Control
  Raman Arora · Peter Bartlett · Poorya Mianjy · Nati Srebro
- 2020 Poster: FetchSGD: Communication-Efficient Federated Learning with Sketching
  Daniel Rothchild · Ashwinee Panda · Enayat Ullah · Nikita Ivkin · Ion Stoica · Vladimir Braverman · Joseph E Gonzalez · Raman Arora
- 2019 Poster: On Dropout and Nuclear Norm Regularization
  Poorya Mianjy · Raman Arora
- 2019 Oral: On Dropout and Nuclear Norm Regularization
  Poorya Mianjy · Raman Arora
- 2018 Poster: On the Implicit Bias of Dropout
  Poorya Mianjy · Raman Arora · Rene Vidal
- 2018 Oral: On the Implicit Bias of Dropout
  Poorya Mianjy · Raman Arora · Rene Vidal
- 2018 Poster: Streaming Principal Component Analysis in Noisy Setting
  Teodor Vanislavov Marinov · Poorya Mianjy · Raman Arora
- 2018 Oral: Streaming Principal Component Analysis in Noisy Setting
  Teodor Vanislavov Marinov · Poorya Mianjy · Raman Arora