

Poster

Loss Landscapes of Regularized Linear Autoencoders

Daniel Kunin · Jonathan Bloom · Aleksandrina Goeva · Cotton Seed

Pacific Ballroom #26

Keywords: [ Representation Learning ] [ Matrix Factorization ] [ Generative Models ] [ Dimensionality Reduction ] [ Deep Learning Theory ]


Abstract: Autoencoders are deep learning models for representation learning. When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that $L_2$-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. We illustrate these results empirically and consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of learning.
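The central claim, that the left singular vectors of a trained $L_2$-regularized LAE's decoder recover the principal directions, can be checked numerically. Below is a minimal NumPy sketch, not the authors' implementation: the synthetic data, hyperparameters (lam, lr, steps), and plain gradient descent are illustrative assumptions, and the loss is taken to be mean squared reconstruction error plus an $L_2$ (squared Frobenius norm) penalty on the encoder and decoder weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic centered data with three dominant principal directions
# (hypothetical; columns are samples, so X has shape (d, n)).
n, d, k = 500, 10, 3
X = (rng.normal(size=(n, d)) @ np.diag([5.0, 4.0, 3.0] + [0.1] * (d - 3))).T
X -= X.mean(axis=1, keepdims=True)

lam, lr, steps = 1e-2, 1e-3, 20_000   # illustrative hyperparameters
W1 = 0.01 * rng.normal(size=(k, d))   # encoder
W2 = 0.01 * rng.normal(size=(d, k))   # decoder

for _ in range(steps):
    R = W2 @ (W1 @ X) - X                 # reconstruction residual
    gW1 = W2.T @ R @ X.T / n + lam * W1   # grad of MSE + (lam/2)*||W1||_F^2
    gW2 = R @ (W1 @ X).T / n + lam * W2   # grad of MSE + (lam/2)*||W2||_F^2
    W1 -= lr * gW1
    W2 -= lr * gW2

# The paper's result: at a minimum, the left singular vectors of the
# decoder W2 are the top-k principal directions of X (up to sign).
U_dec = np.linalg.svd(W2, full_matrices=False)[0]
U_pca = np.linalg.svd(X, full_matrices=False)[0][:, :k]
print(np.round(np.abs(U_dec.T @ U_pca), 2))   # approximately the identity
```

Note that the alignment check does not depend on where on the critical manifold gradient descent lands: replacing W1 with G @ W1 and W2 with W2 @ G.T for any orthogonal G leaves both the regularized loss and the left singular vectors of the decoder unchanged.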
