

Poster

Eliminating the Invariance on the Loss Landscape of Linear Autoencoders

Reza Oftadeh · Jiayi Shen · Zhangyang “Atlas” Wang · Dylan Shell

Virtual

Keywords: [ Deep Learning - Theory ] [ Autoencoders ] [ Optimization ] [ Deep Learning Theory ]


Abstract:

This paper proposes a new loss function for linear autoencoders (LAEs) and analytically identifies the structure of the associated loss surface. Optimizing the conventional Mean Squared Error (MSE) loss yields a decoder matrix that spans the principal subspace of the sample covariance of the data but, owing to an invariance that cancels out in the global map, fails to identify the exact eigenvectors. We show here that our proposed loss function eliminates this issue, so the decoder converges to the exact ordered, unnormalized eigenvectors of the sample covariance matrix. We characterize the full structure of the new loss landscape by establishing an analytical expression for the set of all critical points, showing that it is a subset of the critical points of MSE, and that all local minima are still global. Specifically, the invariant global minima under MSE are shown to become saddle points under the new loss. Additionally, the computational complexity of the loss and its gradients is the same as that of MSE; thus, the new loss is not only of theoretical importance but also of practical value, e.g., for low-rank approximation.
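The abstract does not state the new loss explicitly, so the sketch below is only an illustration of the phenomenon it describes, not the paper's exact construction. It trains a linear autoencoder by gradient descent on a nested-truncation reconstruction loss, one formulation known to break the invertible-map invariance of MSE, and then checks that the learned decoder columns align with the ordered top eigenvectors of the sample covariance. All dimensions, hyperparameters, and the specific loss form are assumptions for demonstration purposes.

```python
# Illustrative sketch only (assumed loss form, not necessarily the paper's):
# L(A, B) = sum_{i=1}^p ||X - A T_i B X||_F^2, where T_i zeroes out all but
# the first i latent coordinates. This nested structure removes the invariance
# of plain MSE under invertible maps of the latent space.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 8, 3, 2000                       # data dim, latent dim, sample count
X = rng.standard_normal((n, m)) * np.linspace(3.0, 0.5, n)[:, None]  # anisotropic data

def grads(A, B):
    """Analytic gradients of the truncated loss w.r.t. decoder A and encoder B."""
    gA, gB = np.zeros_like(A), np.zeros_like(B)
    for i in range(1, p + 1):
        T = np.diag([1.0] * i + [0.0] * (p - i))  # keep first i latent coordinates
        R = X - A @ T @ B @ X                     # residual for the i-th truncation
        gA += -2.0 * R @ (T @ B @ X).T
        gB += -2.0 * T @ A.T @ R @ X.T
    return gA, gB

A = 0.1 * rng.standard_normal((n, p))      # decoder
B = 0.1 * rng.standard_normal((p, n))      # encoder
lr = 1e-3 / m                              # scale step by 1/m since gradients sum over samples
for _ in range(20000):                     # plain gradient descent
    gA, gB = grads(A, B)
    A -= lr * gA
    B -= lr * gB

# Compare normalized decoder columns with the top eigenvectors of the sample covariance.
C = X @ X.T / m
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, ::-1][:, :p]              # eigenvectors in descending eigenvalue order
cols = A / np.linalg.norm(A, axis=0)
print(np.abs(np.sum(cols * top, axis=0)))  # per-column |dot| near 1.0 => aligned and ordered
```

Under plain MSE, this final check typically fails: the decoder spans the principal subspace, but its columns are an arbitrary invertible mixture of the eigenvectors, which is exactly the invariance the proposed loss is designed to eliminate.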
