Poster
Width Provably Matters in Optimization for Deep Linear Neural Networks
Simon Du · Wei Hu

Thu Jun 13 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #94
We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\widetilde{\Omega}\left(L \cdot r \cdot d_{out} \cdot \kappa^3 \right)$, where $r$ and $\kappa$ are the rank and the condition number of the input data, and $d_{out}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(\frac{1}{\epsilon}))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp\left(\Omega\left(L\right)\right)$ lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
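As an illustration of the setting analyzed in the paper (a minimal sketch, not the authors' code): gradient descent from Gaussian random initialization on a deep fully-connected linear network trained with squared loss. The widths, learning rate, initialization scaling, and synthetic data below are illustrative choices, not the constants required by the theorem.

```python
import numpy as np

# Sketch of the setting: an L-layer linear network f(x) = W_L ... W_1 x
# trained by gradient descent on squared loss. All hyperparameters here
# are illustrative, not the theorem's constants.

rng = np.random.default_rng(0)
d_in, d_out, width, L, n = 10, 5, 256, 4, 100

X = rng.standard_normal((d_in, n))           # input data, columns are examples
W_star = rng.standard_normal((d_out, d_in))  # ground-truth linear map
Y = W_star @ X                               # targets

# Gaussian random initialization; the 1/sqrt(fan_out) scaling keeps layer
# outputs at a stable scale at init (an illustrative choice).
dims = [d_in] + [width] * (L - 1) + [d_out]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i + 1])
      for i in range(L)]

def loss(Ws):
    P = X
    for W in Ws:
        P = W @ P
    return 0.5 * np.linalg.norm(P - Y) ** 2 / n

lr = 1e-2
for step in range(500):
    # Forward pass, caching each layer's output.
    acts = [X]
    for W in Ws:
        acts.append(W @ acts[-1])
    # Backward pass: gradient of 0.5/n * ||W_L...W_1 X - Y||_F^2
    # with respect to each layer's weight matrix.
    grad_out = (acts[-1] - Y) / n
    grads = [None] * L
    for i in reversed(range(L)):
        grads[i] = grad_out @ acts[i].T
        grad_out = Ws[i].T @ grad_out
    # Plain gradient descent update on every layer.
    for i in range(L):
        Ws[i] -= lr * grads[i]
    if step % 100 == 0:
        print(step, loss(Ws))
```

With a sufficiently wide hidden layer and a suitable step size, the printed loss should decay geometrically, consistent with the linear convergence rate the paper proves; the learning rate may need tuning for other widths or depths.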

Author Information

Simon Du (Carnegie Mellon University)
Wei Hu (Princeton University)
