Towards Understanding Learning in Neural Networks with Linear Teachers
Roei Sarussi · Alon Brutzkus · Amir Globerson

Tue Jul 20 06:25 AM -- 06:30 AM (PDT)

Can a neural network minimizing cross-entropy learn linearly separable data? Despite progress in the theory of deep learning, this question remains open. Here we prove that SGD globally optimizes this learning problem for a two-layer network with Leaky ReLU activations. The learned network can in principle be very complex; however, empirical evidence suggests that it often turns out to be approximately linear. We provide theoretical support for this phenomenon by proving that if the network weights converge to two weight clusters, the decision boundary is approximately linear. Finally, we give a condition on the optimization that leads to such weight clustering. We provide empirical results that validate our theoretical analysis.
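The setting described in the abstract can be reproduced in a few lines. Below is a minimal sketch, not the paper's exact construction: a two-layer Leaky ReLU network with fixed second-layer signs (a common simplification in this line of theory, assumed here) is trained by single-sample SGD on cross-entropy over linearly separable data generated by a linear teacher `w_star`, after which we can inspect the training accuracy and whether hidden weights cluster into two directions.

```python
# Sketch: two-layer Leaky ReLU network trained with SGD on linearly
# separable data. All names (w_star, alpha, k, ...) are illustrative
# choices, not quantities from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, alpha = 200, 2, 10, 0.2   # samples, input dim, hidden units, leaky slope

# Linearly separable labels from a linear "teacher" direction.
w_star = np.array([1.0, -1.0])
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

# Network f(x) = sum_j v_j * leaky_relu(w_j . x), with v_j fixed to +-1
# and only the first layer W trained (an assumption of this sketch).
W = rng.normal(scale=0.1, size=(k, d))
v = np.array([1.0] * (k // 2) + [-1.0] * (k // 2))

def leaky(z):
    return np.where(z > 0, z, alpha * z)

def leaky_grad(z):
    return np.where(z > 0, 1.0, alpha)

lr = 0.1
for step in range(2000):
    i = rng.integers(n)                    # single-sample SGD
    x, t = X[i], y[i]
    z = W @ x                              # pre-activations, shape (k,)
    f = v @ leaky(z)                       # network output
    # Logistic / cross-entropy loss log(1 + exp(-t * f)); g = dloss/df.
    g = -t / (1.0 + np.exp(t * f))
    W -= lr * g * (v * leaky_grad(z))[:, None] * x[None, :]

preds = np.sign(leaky(X @ W.T) @ v)
acc = np.mean(preds == y)
print("train accuracy:", acc)

# Clustering check: if neurons with the same sign of v_j align, the
# decision boundary is approximately linear.
U = W / np.linalg.norm(W, axis=1, keepdims=True)
pos = U[v > 0]
print("min cosine among positive-sign neurons:", (pos @ pos.T).min())
```

With separable data this setup typically reaches near-perfect training accuracy, and the per-sign cosine similarities give a quick numerical handle on the "two weight clusters" condition the abstract refers to.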

Author Information

Roei Sarussi (Tel Aviv University)
Alon Brutzkus (Tel Aviv University)
Amir Globerson (Tel Aviv University, Google)
