Poster in Workshop: HiLD: High-dimensional Learning Dynamics Workshop
Implicitly Learned Invariance and Equivariance in Linear Regression
Yonatan Gideoni
Abstract:
Can deep learning models generalize if their problem's underlying structure is unknown a priori? We analyze this question theoretically and empirically in an idealized setting: linear regression with invariant/equivariant data. We prove that linear regression models learn to become invariant/equivariant: their weights decompose into a component that respects the symmetry and one that does not. These two components evolve independently over training, with the asymmetric component decaying exponentially given sufficient data. Extending these results to more complex systems is left for future work.
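For intuition, the sketch below (not from the paper) trains linear regression by gradient descent on data whose labels are invariant under permuting the input coordinates, and tracks the norm of the weight component that breaks that symmetry. The specific choices (full permutation group, dimensions, learning rate) are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch (not the authors' code): gradient descent on linear
# regression with permutation-invariant labels. The weight vector is split
# into a symmetry-respecting component (projection onto the invariant
# subspace) and an asymmetric remainder, whose norm is tracked over training.
import numpy as np

rng = np.random.default_rng(0)
d, n, lr, steps = 8, 512, 0.05, 300

# Ground-truth weights that respect full permutation symmetry: all entries equal.
w_true = np.full(d, 1.0)

# Training data whose labels are invariant under permuting input coordinates.
X = rng.normal(size=(n, d))
y = X @ w_true

# Projector onto the permutation-invariant subspace (span of the all-ones vector).
ones = np.ones((d, 1)) / np.sqrt(d)
P_sym = ones @ ones.T

w = rng.normal(size=d)  # generic (asymmetric) initialization
asym_norms = []
for _ in range(steps):
    grad = X.T @ (X @ w - y) / n         # gradient of the mean-squared error
    w -= lr * grad
    w_asym = (np.eye(d) - P_sym) @ w     # component that breaks the symmetry
    asym_norms.append(np.linalg.norm(w_asym))

# With sufficient data the asymmetric component shrinks roughly exponentially.
print(asym_norms[::50])
```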