Poster
Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #103
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
Simon Du · Jason Lee · Yuandong Tian · Aarti Singh · Barnabás Póczos
We consider the problem of learning a one-hidden-layer neural network with a non-overlapping convolutional layer and ReLU activation function, i.e., $f(Z; w, a) = \sum_j a_j\sigma(w^\top Z_j)$, in which both the convolutional weights $w$ and the output weights $a$ are parameters to be learned. We prove that with Gaussian input $Z$, there is a spurious local minimizer. Surprisingly, in the presence of the spurious local minimizer, starting from randomly initialized weights, gradient descent with weight normalization can still be proven to recover the true parameters with constant probability (which can be boosted to probability $1$ with multiple restarts). We also show that with constant probability, the same procedure can converge to the spurious local minimum, showing that this local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics exhibit two phases: convergence starts off slowly, but becomes much faster after several iterations.
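Below is a minimal sketch (not the authors' code) of the setup described in the abstract: a student network $f(Z; w, a) = \sum_j a_j\sigma(w^\top Z_j)$ with non-overlapping Gaussian patches $Z_j$, trained by gradient descent on the squared loss against a fixed teacher $(w^*, a^*)$. For brevity it uses plain gradient descent rather than the weight-normalized parameterization analyzed in the paper, and the dimensions, batch size, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p = 4, 8                      # k non-overlapping patches, each of dimension p (assumed values)

def relu(x):
    return np.maximum(x, 0.0)

def f(Z, w, a):
    """f(Z; w, a) = sum_j a_j * relu(w^T Z_j); Z has shape (k, p)."""
    return a @ relu(Z @ w)

# Teacher parameters generate the labels; the student must recover them.
w_star, a_star = rng.normal(size=p), rng.normal(size=k)

def loss_and_grads(w, a, n=256):
    """Squared loss approximated on a fresh Gaussian batch, with its gradients."""
    grad_w, grad_a, loss = np.zeros(p), np.zeros(k), 0.0
    for _ in range(n):
        Z = rng.normal(size=(k, p))            # Gaussian input, as in the paper
        err = f(Z, w, a) - f(Z, w_star, a_star)
        loss += 0.5 * err**2 / n
        act = relu(Z @ w)                      # hidden-layer activations
        grad_a += err * act / n
        grad_w += err * (Z.T @ (a * (Z @ w > 0))) / n
    return loss, grad_w, grad_a

# Randomly initialized gradient descent; step size 0.05 is an assumption.
w, a = rng.normal(size=p), rng.normal(size=k)
for t in range(500):
    loss, gw, ga = loss_and_grads(w, a)
    w, a = w - 0.05 * gw, a - 0.05 * ga
    if t % 100 == 0:
        print(f"iter {t:4d}  loss {loss:.4f}")
```

Running such a sketch from several random restarts illustrates the behavior the abstract describes: some runs drive the loss to (near) zero, while others settle at a strictly positive value corresponding to a spurious local minimum, and the loss curve typically decreases slowly at first before dropping quickly.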