Oral
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Peter Bartlett · Dave Helmbold · Phil Long
We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping $\Re^d$ to $\Re^d$ using deep linear neural networks, i.e., that learn a function $h$ parameterized by matrices $\Theta_1, ..., \Theta_L$ and defined by $h(x) = \Theta_L \Theta_{L-1} ... \Theta_1 x$. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least squares matrix $\Phi$, in the case where the initial hypothesis $\Theta_1 = ... = \Theta_L = I$ has excess loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for $\Phi$ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If $\Phi$ is symmetric positive definite, we show that an algorithm that initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a number of updates polynomial in $L$, the condition number of $\Phi$, and $\log(d/\epsilon)$. In contrast, we show that if the least squares matrix $\Phi$ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top}\Phi u > 0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} ... \Theta_1 u > 0$ for all $u$, and another that "balances" $\Theta_1, ..., \Theta_L$ so that they have the same singular values.
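The setting in the abstract can be illustrated with a short sketch. The NumPy code below runs plain gradient descent on the population loss $\frac{1}{2}\|\Theta_L \cdots \Theta_1 - \Phi\|_F^2$ (the quadratic loss under isotropic inputs) from the identity initialization; it is only a minimal illustration of the setting, not the paper's analyzed algorithm with its regularizers, and the names `gd_identity_init` and `product`, the depth, step size, and example target are illustrative choices rather than values from the paper.

```python
import numpy as np


def product(mats, d):
    """Theta_k @ ... @ Theta_1 for mats = [Theta_1, ..., Theta_k]; identity if empty."""
    W = np.eye(d)
    for Theta in mats:
        W = Theta @ W
    return W


def gd_identity_init(Phi, L=4, eta=0.01, steps=2000):
    """Gradient descent on (1/2)||Theta_L ... Theta_1 - Phi||_F^2, the population
    quadratic loss under isotropic inputs, starting from Theta_1 = ... = Theta_L = I."""
    d = Phi.shape[0]
    Thetas = [np.eye(d) for _ in range(L)]
    for _ in range(steps):
        E = product(Thetas, d) - Phi  # gradient w.r.t. the end-to-end matrix
        # d/dTheta_i of the loss: (Theta_L...Theta_{i+1})^T E (Theta_{i-1}...Theta_1)^T
        grads = [product(Thetas[i + 1:], d).T @ E @ product(Thetas[:i], d).T
                 for i in range(L)]
        for i in range(L):
            Thetas[i] -= eta * grads[i]
    return Thetas


# Example: a symmetric positive definite target close to the identity.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Phi = np.eye(5) + 0.05 * (A + A.T)
Thetas = gd_identity_init(Phi)
print(np.linalg.norm(product(Thetas, 5) - Phi))  # residual shrinks toward 0
```

Under these assumptions (target close to the identity, small step size), the end-to-end product converges to $\Phi$; the abstract's negative results concern targets far from the identity or with a negative eigenvalue, which this sketch does not cover.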
Author Information
Peter Bartlett (UC Berkeley)
Dave Helmbold
Phil Long (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks »
  Fri. Jul 13th 04:15 -- 07:00 PM Room Hall B #153
More from the Same Authors
- 2021 : Finite-Sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime »
  Niladri Chatterji · Phil Long
- 2021 : When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations? »
  Niladri Chatterji · Phil Long · Peter Bartlett
- 2021 : On the Theory of Reinforcement Learning with Once-per-Episode Feedback »
  Niladri Chatterji · Aldo Pacchiano · Peter Bartlett · Michael Jordan
- 2023 Poster: Deep linear networks can benignly overfit when shallow ones do »
  Niladri S. Chatterji · Phil Long
- 2021 : Adversarial Examples in Random Deep Networks »
  Peter Bartlett
- 2020 Poster: On Thompson Sampling with Langevin Algorithms »
  Eric Mazumdar · Aldo Pacchiano · Yian Ma · Michael Jordan · Peter Bartlett
- 2020 Poster: Accelerated Message Passing for Entropy-Regularized MAP Inference »
  Jonathan Lee · Aldo Pacchiano · Peter Bartlett · Michael Jordan
- 2019 Poster: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Scale-free adaptive planning for deterministic dynamics & discounted rewards »
  Peter Bartlett · Victor Gabillon · Jennifer Healey · Michal Valko
- 2019 Oral: Scale-free adaptive planning for deterministic dynamics & discounted rewards »
  Peter Bartlett · Victor Gabillon · Jennifer Healey · Michal Valko
- 2019 Oral: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Rademacher Complexity for Adversarially Robust Generalization »
  Dong Yin · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Online learning with kernel losses »
  Niladri Chatterji · Aldo Pacchiano · Peter Bartlett
- 2019 Oral: Rademacher Complexity for Adversarially Robust Generalization »
  Dong Yin · Kannan Ramchandran · Peter Bartlett
- 2019 Oral: Online learning with kernel losses »
  Niladri Chatterji · Aldo Pacchiano · Peter Bartlett
- 2018 Poster: On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo »
  Niladri Chatterji · Nicolas Flammarion · Yian Ma · Peter Bartlett · Michael Jordan
- 2018 Oral: On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo »
  Niladri Chatterji · Nicolas Flammarion · Yian Ma · Peter Bartlett · Michael Jordan
- 2018 Poster: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2018 Oral: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2017 Poster: Recovery Guarantees for One-hidden-layer Neural Networks »
  Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon
- 2017 Talk: Recovery Guarantees for One-hidden-layer Neural Networks »
  Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon