

Oral

Width Provably Matters in Optimization for Deep Linear Neural Networks

Simon Du · Wei Hu

Abstract:

We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\widetilde{\Omega}(L \, r \, d_{\mathrm{out}} \, \kappa^3)$, where $r$ and $\kappa$ are the rank and the condition number of the input data and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(1/\epsilon))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp(\Omega(L))$ lower bound for narrow deep linear networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
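The setting in the abstract can be illustrated with a small numerical sketch. The snippet below is a toy NumPy simulation, not the paper's construction: it runs gradient descent on an $L$-layer fully-connected linear network with Gaussian random initialization on a realizable regression problem. The problem sizes, the $1/\sqrt{\text{fan-out}}$ initialization scaling, and the step size are illustrative assumptions rather than the theorem's prescriptions.

```python
import numpy as np

# Toy simulation of the setting above (an illustrative sketch, not the
# paper's construction): gradient descent on a deep fully-connected
# linear network with Gaussian random initialization.
rng = np.random.default_rng(0)

d_in, d_out, n = 10, 3, 50          # data dimensions (assumed values)
L, m = 4, 256                       # depth and hidden width (assumed values)

X = rng.standard_normal((d_in, n))
W_star = rng.standard_normal((d_out, d_in))
Y = W_star @ X                      # realizable linear targets

# Gaussian init with 1/sqrt(fan_out) scaling so each factor is roughly
# norm-preserving (one standard choice, not necessarily the paper's).
dims = [d_in] + [m] * (L - 1) + [d_out]
W = [rng.standard_normal((dims[l + 1], dims[l])) / np.sqrt(dims[l + 1])
     for l in range(L)]

def product(mats):
    """Return mats[-1] @ ... @ mats[0], the end-to-end linear map."""
    P = np.eye(mats[0].shape[1])
    for M in mats:
        P = M @ P
    return P

# Conservative step size relative to depth and the data spectrum
# (an illustrative choice, not the theorem's step size).
eta = 1.0 / (2 * L * np.linalg.norm(X, 2) ** 2)

for t in range(1001):
    R = product(W) @ X - Y          # residual of the end-to-end map
    loss = 0.5 * np.linalg.norm(R) ** 2
    grads = []
    for l in range(L):
        left = product(W[l + 1:]) if l + 1 < L else np.eye(d_out)
        right = product(W[:l]) if l > 0 else np.eye(d_in)
        # d(loss)/dW_l = (W_L ... W_{l+1})^T R X^T (W_{l-1} ... W_1)^T
        grads.append(left.T @ R @ X.T @ right.T)
    W = [Wl - eta * g for Wl, g in zip(W, grads)]
    if t % 200 == 0:
        print(f"iter {t:4d}   loss {loss:.3e}")
```

Within this sketch, the width m and the depth L can be varied to probe the regime the abstract's upper and lower bounds describe.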
