

Poster

Towards Understanding Learning in Neural Networks with Linear Teachers

Roei Sarussi · Alon Brutzkus · Amir Globerson

Keywords: [ MCMC ] [ Theory ] [ Probabilistic Methods ]


Abstract:

Can a neural network minimizing cross-entropy learn linearly separable data? Despite progress in the theory of deep learning, this question remains unresolved. Here we prove that SGD globally optimizes this learning problem for a two-layer network with Leaky ReLU activations. The learned network can in principle be very complex. However, empirical evidence suggests that it often turns out to be approximately linear. We provide theoretical support for this phenomenon by proving that if the network weights converge to two weight clusters, the resulting decision boundary is approximately linear. Finally, we show a condition on the optimization that leads to weight clustering. We provide empirical results that validate our theoretical analysis.
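The sketch below is not the authors' code; it is a minimal illustration of the setting the abstract describes: a two-layer Leaky ReLU network trained with SGD on the logistic (cross-entropy) loss over data labeled by a linear teacher, followed by a rough check for the two-cluster weight structure and for approximate linearity of the decision boundary. The dimensions, width, learning rate, and leaky slope are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative sizes and hyperparameters (assumptions, not from the paper)
d, n, hidden = 20, 500, 50
lr, steps, slope = 0.1, 2000, 0.2

# Linearly separable data generated by a random linear teacher
w_star = torch.randn(d)
X = torch.randn(n, d)
y = torch.sign(X @ w_star)                     # labels in {-1, +1}

# Two-layer Leaky ReLU network; both layers are trained here (an assumption)
W = (0.1 * torch.randn(hidden, d)).requires_grad_()
v = (0.1 * torch.randn(hidden)).requires_grad_()

opt = torch.optim.SGD([W, v], lr=lr)
for _ in range(steps):
    out = F.leaky_relu(X @ W.t(), negative_slope=slope) @ v   # network output
    loss = F.softplus(-y * out).mean()                        # logistic loss on margins
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    # Rough clustering check: normalize hidden weight vectors and group them
    # by the sign of their output weight; high within-group cosine similarity
    # indicates the two-cluster structure discussed in the abstract.
    dirs = W / W.norm(dim=1, keepdim=True)
    for s in (+1, -1):
        group = dirs[torch.sign(v) == s]
        if len(group) > 1:
            centre = group.mean(dim=0)
            centre = centre / centre.norm()
            print(f"output-weight sign {s:+d}: mean cosine to cluster centre "
                  f"= {(group @ centre).mean():.3f}")

    # Approximate linearity check: fit a linear surrogate to the network's
    # outputs and measure how often its sign agrees with the network's.
    out = F.leaky_relu(X @ W.t(), negative_slope=slope) @ v
    w_lin = torch.linalg.lstsq(X, out.unsqueeze(1)).solution.squeeze()
    agree = (torch.sign(X @ w_lin) == torch.sign(out)).float().mean()
    print(f"sign agreement with linear surrogate: {agree:.3f}")
```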
