Poster in Workshop: Over-parameterization: Pitfalls and Opportunities
On Alignment in Deep Linear Neural Networks
Adityanarayanan Radhakrishnan · Eshaan Nichani · Daniel Bernstein · Caroline Uhler
We study the properties of alignment, a form of implicit regularization, in linear neural networks trained by gradient descent. We define alignment for fully connected networks with multidimensional outputs and show that it is a natural extension of the notion of alignment for networks with 1-dimensional outputs introduced by Ji & Telgarsky (2018). Although in fully connected networks there always exists a global minimum corresponding to an aligned solution, we analyze alignment as a property of the training process itself. Namely, we characterize when alignment is an invariant of training under gradient descent by providing necessary and sufficient conditions for this invariance to hold. In such settings, the dynamics of gradient descent simplify, allowing us to provide an explicit learning rate under which the network converges linearly to a global minimum. We conclude by analyzing networks with layer constraints, such as convolutional networks, and prove that alignment is impossible with sufficiently large datasets.
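As a small numerical sketch of the invariance discussed above (an illustration under assumed settings, not the paper's construction): for a three-layer linear network with 1-dimensional output and a rank-one "aligned" initialization, in which adjacent weight matrices share a singular vector, gradient descent on the squared loss preserves the alignment between each layer's top left singular vector and the next layer's top right singular vector. All sizes, data, and initialization scales below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: n samples in R^d, 1-dimensional targets from a linear teacher.
n, d, h = 20, 5, 4
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star

def unit(v):
    return v / np.linalg.norm(v)

# Three-layer linear network f(x) = w3 @ W2 @ W1 @ x with an aligned
# (rank-one) initialization: W_i = s * u_{i+1} u_i^T, so adjacent
# layers share the singular vector u_{i+1}.
u1, u2, u3 = unit(rng.standard_normal(d)), unit(rng.standard_normal(h)), unit(rng.standard_normal(h))
W1 = 0.5 * np.outer(u2, u1)   # shape (h, d)
W2 = 0.5 * np.outer(u3, u2)   # shape (h, h)
w3 = 0.5 * u3                 # final layer, stored as a vector of shape (h,)

def alignment(A, B):
    """|cosine| between the top LEFT singular vector of A and the
    top RIGHT singular vector of B, where B acts after A."""
    uA = np.linalg.svd(A)[0][:, 0]
    vB = np.linalg.svd(B)[2][0, :]
    return abs(uA @ vB)

lr = 1e-3
for _ in range(500):
    pred = X @ W1.T @ W2.T @ w3       # network outputs, shape (n,)
    r = pred - y                      # residuals of the squared loss
    g = r @ X / n                     # gradient w.r.t. the end-to-end product, shape (d,)
    # Gradients of each layer, obtained by chain rule through the matrix product.
    gW1 = np.outer(W2.T @ w3, g)
    gW2 = np.outer(w3, W1 @ g)
    gw3 = W2 @ W1 @ g
    W1 -= lr * gW1
    W2 -= lr * gW2
    w3 -= lr * gw3

# With this initialization, adjacent-layer alignment is preserved
# (up to floating-point error) throughout training.
a12 = alignment(W1, W2)
a23 = alignment(W2, w3[None, :])
print(a12, a23)
```

The key point the sketch makes visible: for hidden layers the gradient itself is proportional to the shared rank-one direction, so gradient descent only rescales those layers, while the input-side singular vector of the first layer is free to rotate toward the data. The general multidimensional-output conditions are what the paper characterizes.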