Poster
Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #191
Learning One Convolutional Layer with Overlapping Patches
Surbhi Goel · Adam Klivans · Raghu Meka
We give the first provably efficient algorithm for learning a one-hidden-layer convolutional network with respect to a general class of (potentially overlapping) patches, under mild conditions on the underlying distribution. We prove that our framework captures commonly used schemes from computer vision, including one-dimensional and two-dimensional ``patch and stride'' convolutions. Our algorithm, {\em Convotron}, is inspired by recent work applying isotonic regression to learning neural networks. Convotron uses a simple, iterative update rule that is stochastic in nature and tolerant to noise: it requires only that the conditional mean function be a one-hidden-layer convolutional network, as opposed to the fully realizable setting. In contrast to gradient descent, Convotron requires no special initialization or learning-rate tuning to converge to the global optimum. We also point out that learning one hidden convolutional layer with respect to a Gaussian distribution and just {\em one} disjoint patch $P$ (the other patches may be arbitrary) is {\em easy} in the following sense: Convotron can efficiently recover the hidden weight vector by updating {\em only} in the direction of $P$.
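
For intuition, the update rule the abstract describes can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's pseudocode: the ReLU activation, the averaging over patches, the step size eta, the iteration count T, and all function and variable names are choices made for the example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def predict(w, x, patches):
    # One hidden convolutional layer: average the ReLU response of the
    # shared weight vector w over all patches of the input x.
    return np.mean([relu(w @ x[P]) for P in patches])

def convotron(data, patches, patch_dim, eta=0.01, T=5000):
    # Convotron-style iterative rule (sketch): start from the zero vector
    # (no special initialization) and move w along the summed patch
    # directions, scaled by the prediction error. Unlike gradient descent,
    # the update omits the derivative of the activation, in the spirit of
    # isotonic-regression-based updates.
    w = np.zeros(patch_dim)
    for _ in range(T):
        x, y = next(data)
        err = y - predict(w, x, patches)
        w = w + eta * err * sum(x[P] for P in patches)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, stride = 16, 6, 2
    # A one-dimensional "patch and stride" scheme with overlapping patches.
    patches = [np.arange(s, s + k) for s in range(0, n - k + 1, stride)]
    w_star = rng.normal(size=k)  # hidden weight vector to recover

    def samples():
        # Gaussian inputs with noiseless (realizable) labels, for the demo.
        while True:
            x = rng.normal(size=n)
            yield x, predict(w_star, x, patches)

    w_hat = convotron(samples(), patches, k)
    print("recovery error:", np.linalg.norm(w_hat - w_star))
```

The sketch uses noiseless labels for simplicity; the abstract's noise tolerance means the same update is run when y is only correct in conditional mean.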