Autoencoders are deep learning models for representation learning. When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that L2-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. We illustrate these results empirically and consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of learning.
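As a rough illustration of the headline result (this is not the authors' code), the following minimal NumPy sketch trains an L2-regularized LAE by plain gradient descent and then checks that the left singular vectors of the decoder align with the top principal directions of the data. All hyperparameters (lam, lr, steps) and the synthetic data are assumptions chosen for the demo.

```python
import numpy as np

# Hypothetical demo: L2-regularized linear autoencoder x -> W2 @ W1 @ x.
# Loss: (1/2n)||X W1^T W2^T - X||_F^2 + (lam/2)(||W1||_F^2 + ||W2||_F^2).
rng = np.random.default_rng(0)
n, d, k = 1000, 10, 3                         # samples, ambient dim, latent dim
X = rng.standard_normal((n, d)) @ np.diag(np.linspace(3.0, 0.5, d))
X -= X.mean(axis=0)                           # center the data

lam, lr, steps = 1e-2, 2e-3, 20000            # assumed hyperparameters
W1 = 0.1 * rng.standard_normal((k, d))        # encoder
W2 = 0.1 * rng.standard_normal((d, k))        # decoder

for _ in range(steps):
    Z = X @ W1.T                              # latent codes, shape (n, k)
    R = Z @ W2.T - X                          # reconstruction residual, (n, d)
    gW2 = R.T @ Z / n + lam * W2              # gradient w.r.t. decoder
    gW1 = W2.T @ R.T @ X / n + lam * W1       # gradient w.r.t. encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

# Per the paper's result, the left singular vectors of the decoder W2
# should recover the principal directions (up to sign).
U, _, _ = np.linalg.svd(W2, full_matrices=False)

# Reference: top-k eigenvectors of the sample covariance (classical PCA).
evals, evecs = np.linalg.eigh(X.T @ X / n)    # ascending order
pcs = evecs[:, ::-1][:, :k]

# Alignment up to sign: |U^T pcs| should be close to the identity matrix.
print(np.round(np.abs(U.T @ pcs), 2))
```

Note that without the L2 penalty (lam = 0), the same training loop would only recover the top-k principal subspace: the decoder's singular vectors could be an arbitrary rotation of the principal directions, which is exactly the degeneracy the regularization breaks.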
Author Information
Daniel Kunin (Stanford University)
Jonathan Bloom (Broad Institute of MIT and Harvard)
Aleksandrina Goeva (Broad Institute of MIT and Harvard)
Cotton Seed (Broad Institute of MIT and Harvard)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Loss Landscapes of Regularized Linear Autoencoders
  Fri. Jun 14th, 01:30 -- 04:00 AM, Pacific Ballroom #26
More from the Same Authors
- 2020 Poster: Two Routes to Scalable Credit Assignment without Weight Symmetry
  Daniel Kunin · Aran Nayebi · Javier Sagastuy-Brena · Surya Ganguli · Jonathan Bloom · Daniel Yamins