Session: Deep Learning (Theory) 6
Spurious Local Minima are Common in Two-Layer ReLU Neural Networks
Itay Safran · Ohad Shamir
We consider the optimization problem associated with training simple ReLU neural networks of the form $\mathbf{x}\mapsto \sum_{i=1}^{k}\max\{0,\mathbf{w}_i^\top \mathbf{x}\}$ with respect to the squared loss. We provide a computer-assisted proof that even if the input distribution is standard Gaussian, even if the dimension is arbitrarily large, and even if the target values are generated by such a network with orthonormal parameter vectors, the problem can still have spurious local minima when $6\le k\le 20$. By a concentration of measure argument, this implies that in high input dimensions, \emph{nearly all} target networks of the relevant sizes lead to spurious local minima. Moreover, we conduct experiments which show that the probability of hitting such local minima is quite high and increases with the network size. On the positive side, mild over-parameterization appears to drastically reduce such local minima, indicating that an over-parameterization assumption is necessary to get a positive result in this setting.
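As an illustration of the kind of experiment described above, the following sketch (assuming PyTorch; the sizes $k=10$, $d=30$ and the step size are illustrative placeholders, not the paper's exact setup) trains a student network of the stated form on standard-Gaussian inputs whose targets come from a planted network with orthonormal parameter vectors. A loss that plateaus clearly above zero is consistent with convergence to a spurious local minimum; whether that happens depends on the random initialization.

import torch

d, k, lr, n_steps = 30, 10, 0.05, 5000                        # illustrative sizes, not the paper's exact setup

# Planted target network: k orthonormal parameter vectors (as rows).
W_target = torch.linalg.qr(torch.randn(d, k))[0].T            # shape (k, d), orthonormal rows
W = (torch.randn(k, d) / d ** 0.5).requires_grad_()           # randomly initialized student

for step in range(n_steps):
    x = torch.randn(1024, d)                                  # fresh standard-Gaussian batch
    y = torch.relu(x @ W_target.T).sum(dim=1)                 # target values from the planted network
    y_hat = torch.relu(x @ W.T).sum(dim=1)                    # student output sum_i max(0, w_i^T x)
    loss = ((y_hat - y) ** 2).mean()                          # squared loss
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad                                      # plain gradient step
        W.grad.zero_()

print(f"final squared loss: {loss.item():.4f}")               # a clearly nonzero plateau suggests a spurious local minimum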
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Peter Bartlett · Dave Helmbold · Phil Long
We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping $\Re^d$ to $\Re^d$ using deep linear neural networks, i.e.\ that learn a function $h$ parameterized by matrices $\Theta_1,...,\Theta_L$ and defined by $h(x) = \Theta_L \Theta_{L-1} ... \Theta_1 x$. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least squares matrix $\Phi$, in the case where the initial hypothesis $\Theta_1 = ... = \Theta_L = I$ has excess loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for $\Phi$ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If $\Phi$ is symmetric positive definite, we show that an algorithm that initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a number of updates polynomial in $L$, the condition number of $\Phi$, and $\log(d/\epsilon)$. In contrast, we show that if the least squares matrix $\Phi$ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top}\Phi u > 0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} ... \Theta_1 u > 0$ for all $u$, and another that ``balances'' $\Theta_1, ..., \Theta_L$ so that they have the same singular values.
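A minimal sketch of the setting (assuming PyTorch; the sizes, target $\Phi$, and step size below are illustrative, not taken from the paper): for isotropic inputs the population quadratic loss reduces, up to constants, to the squared Frobenius distance between $\Theta_L \Theta_{L-1} ... \Theta_1$ and $\Phi$, so gradient descent from the identity initialization can be simulated directly on that objective.

import torch

d, L, lr, n_steps = 10, 8, 0.01, 2000                         # illustrative sizes

# A symmetric positive definite target Phi at a modest distance from the identity.
A = 0.1 * torch.randn(d, d)
Phi = torch.eye(d) + A @ A.T

Thetas = [torch.eye(d, requires_grad=True) for _ in range(L)] # identity initialization Theta_1 = ... = Theta_L = I

for step in range(n_steps):
    prod = torch.eye(d)
    for Theta in Thetas:
        prod = Theta @ prod                                   # builds Theta_L ... Theta_1
    loss = ((prod - Phi) ** 2).sum()                          # excess loss in squared Frobenius norm
    loss.backward()
    with torch.no_grad():
        for Theta in Thetas:
            Theta -= lr * Theta.grad                          # gradient step on every layer
            Theta.grad.zero_()

print(f"squared Frobenius distance to Phi: {loss.item():.2e}")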
On the Power of Over-parametrization in Neural Networks with Quadratic Activation
Simon Du · Jason Lee
We provide new theoretical insights into why over-parametrization is effective in learning neural networks. For a shallow network with $k$ hidden nodes, quadratic activation, and $n$ training data points, we show that as long as $k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a \emph{globally} optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, we use the theory of Rademacher complexity to show that, with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as Gaussian. To prove that the loss function has benign landscape properties when $k\ge \sqrt{2n}$, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
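A minimal sketch of the setup (assuming PyTorch; the data, targets, hyperparameters, and trainable second-layer weights $a_i$ below are assumptions of this sketch rather than details from the paper): a shallow network with quadratic activation and $k \ge \sqrt{2n}$ hidden nodes trained by gradient descent with weight decay on the squared loss.

import torch

n, d = 200, 30                                                # n training points in R^d
k = int((2 * n) ** 0.5) + 1                                   # k >= sqrt(2n) hidden nodes

X = torch.randn(n, d)                                         # Gaussian training inputs
y = torch.randn(n)                                            # placeholder regression targets

W = (torch.randn(k, d) / d ** 0.5).requires_grad_()           # hidden-layer weights w_i
a = (torch.randn(k) / k ** 0.5).requires_grad_()              # second-layer weights a_i (an assumption of this sketch)
opt = torch.optim.SGD([W, a], lr=1e-2, weight_decay=1e-3)     # weight decay regularization

for step in range(5000):
    opt.zero_grad()
    y_hat = ((X @ W.T) ** 2) @ a                              # x -> sum_i a_i (w_i^T x)^2, quadratic activation
    loss = ((y_hat - y) ** 2).mean()                          # smooth convex loss of the prediction
    loss.backward()
    opt.step()

print(f"training loss: {loss.item():.4f}")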
Optimization Landscape and Expressivity of Deep CNNs
Quynh Nguyen · Matthias Hein
We analyze the loss landscape and expressiveness of practical deep convolutional neural networks (CNNs) with shared weights and max pooling layers. We show that such CNNs produce linearly independent features at a ``wide'' layer which has more neurons than the number of training samples; this condition holds, e.g., for the VGG network. Furthermore, for such wide CNNs we provide necessary and sufficient conditions for global minima with zero training error. For the case where the wide layer is followed by a fully connected layer, we show that almost every critical point of the empirical loss is a global minimum with zero training error. Our analysis suggests that both depth and width are very important in deep learning: while depth brings more representational power and allows the network to learn high-level features, width smooths the optimization landscape of the loss function, in the sense that a sufficiently wide network has a well-behaved loss surface with almost no bad local minima.
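The wide-layer condition can be checked numerically. The sketch below (assuming PyTorch; the toy CNN and random inputs are illustrative stand-ins, not VGG or real data) computes the rank of the feature matrix produced at a layer with far more neurons than samples; full rank means the $n$ feature vectors are linearly independent.

import torch
import torch.nn as nn

n = 32                                                        # number of training samples
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),   # "wide" layer: 32*16*16 = 8192 neurons >> n
)

x = torch.randn(n, 3, 32, 32)                                 # random inputs as stand-ins for training data
with torch.no_grad():
    feats = net(x).flatten(start_dim=1)                       # n x (number of neurons) feature matrix

rank = torch.linalg.matrix_rank(feats).item()
print(f"feature matrix shape {tuple(feats.shape)}, rank = {rank}")  # rank == n means linearly independent features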