Poster
Width Provably Matters in Optimization for Deep Linear Neural Networks
Simon Du · Wei Hu
Pacific Ballroom #94
Keywords: [ Deep Learning Theory ] [ Non-convex Optimization ]
Abstract:
We prove that for an L-layer fully-connected linear neural network, if the width of every hidden layer is Ω̃(L · r · d_out · κ³), where r and κ are the rank and the condition number of the input data, and d_out is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an ε-suboptimal solution is O(κ log(1/ε)). Our polynomial upper bound on the total running time for wide deep linear networks and the exp(Ω(L)) lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
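To make the setting concrete, here is a minimal NumPy sketch of the object the theorem analyzes: gradient descent from Gaussian random initialization on the squared loss of an L-layer linear network W_L ⋯ W_1 X. The width, initialization scale, step size, and synthetic data below are illustrative choices for the sketch, not the paper's exact constants.

```python
import numpy as np

def train_deep_linear(X, Y, L=3, width=32, lr=1e-3, steps=2000, seed=0):
    """Gradient descent on the L-layer deep linear network f(X) = W_L ... W_1 X,
    minimizing 0.5 * ||f(X) - Y||_F^2. Hyperparameters here are illustrative,
    not the constants from the paper."""
    rng = np.random.default_rng(seed)
    d_in, d_out = X.shape[0], Y.shape[0]
    dims = [d_in] + [width] * (L - 1) + [d_out]
    # Gaussian random initialization, scaled by 1/sqrt(fan-in).
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(dims[i]), size=(dims[i + 1], dims[i]))
          for i in range(L)]
    losses = []
    for _ in range(steps):
        # Forward pass, caching the partial products A_i = W_i ... W_1 X.
        acts = [X]
        for W in Ws:
            acts.append(W @ acts[-1])
        resid = acts[-1] - Y
        losses.append(0.5 * float(np.sum(resid ** 2)))
        # Backward pass: grad wrt W_i is (W_L...W_{i+1})^T resid (W_{i-1}...W_1 X)^T.
        grad_out = resid
        for i in reversed(range(L)):
            grad_W = grad_out @ acts[i].T
            grad_out = Ws[i].T @ grad_out  # propagate through W_i before updating it
            Ws[i] -= lr * grad_W
    return losses
```

On well-conditioned data generated by a linear teacher, the loss curve from this sketch decays geometrically toward zero, which is the linear-rate behavior the theorem guarantees for sufficiently wide hidden layers.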