Towards Understanding Learning in Neural Networks with Linear Teachers

Abstract

Can a neural network that minimizes cross-entropy learn linearly separable data? Despite progress in the theory of deep learning, this question remains open. Here we prove that SGD globally optimizes this learning problem for a two-layer network with Leaky ReLU activations. The learned network can in principle be very complex; empirically, however, it often turns out to be approximately linear. We provide theoretical support for this phenomenon by proving that if the network weights converge to two weight clusters, then the decision boundary is approximately linear. Finally, we identify a condition on the optimization that leads to weight clustering. We provide empirical results that validate our theoretical analysis.
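To make the setting concrete, below is a minimal sketch, not the paper's experimental code: the hidden width, Leaky ReLU slope, learning rate, data distribution, and full-batch gradient steps are all illustrative assumptions. It trains a two-layer Leaky ReLU network with cross-entropy (logistic loss) on data generated by a linear teacher, then runs a crude check for the two-cluster weight structure described in the abstract.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Linearly separable data from a linear teacher: label = 1[w* . x > 0]
d, n = 10, 500
w_star = torch.randn(d)
X = torch.randn(n, d)
y = (X @ w_star > 0).float()

# Two-layer Leaky ReLU network; width and slope are illustrative choices
k = 50
model = nn.Sequential(
    nn.Linear(d, k, bias=False),
    nn.LeakyReLU(0.1),
    nn.Linear(k, 1, bias=False),
)

# Full-batch gradient steps for simplicity (the paper analyzes SGD)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on logits

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

# Crude clustering check: normalized hidden weights should concentrate
# around one direction per sign of the corresponding output weight.
W = model[0].weight.detach()                  # (k, d) hidden weights
v = model[2].weight.detach().squeeze(0)       # (k,) output weights
dirs = W / W.norm(dim=1, keepdim=True)
for mask, name in [(v > 0, "positive"), (v < 0, "negative")]:
    if mask.any():
        center = dirs[mask].mean(0)
        center = center / center.norm()
        cos = dirs[mask] @ center
        print(f"{name} neurons: mean cosine to cluster center = {cos.mean().item():.3f}")
```

If the two-cluster structure emerges, the mean cosine values approach 1, and, per the result above, the network's decision boundary should be close to that of a single linear separator.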
Author Information
Roei Sarussi (Tel Aviv University)
Alon Brutzkus (Tel Aviv University)
Amir Globerson (Tel Aviv University, Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Towards Understanding Learning in Neural Networks with Linear Teachers
  Tue. Jul 20th, 01:25 -- 01:30 PM
More from the Same Authors
- 2022 Poster: Efficient Learning of CNNs using Patch Based Features
  Alon Brutzkus · Amir Globerson · Eran Malach · Alon Regev Netser · Shai Shalev-Shwartz
- 2022 Spotlight: Efficient Learning of CNNs using Patch Based Features
  Alon Brutzkus · Amir Globerson · Eran Malach · Alon Regev Netser · Shai Shalev-Shwartz
- 2021 Poster: On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
  Shahar Azulay · Edward Moroshko · Mor Shpigel Nacson · Blake Woodworth · Nati Srebro · Amir Globerson · Daniel Soudry
- 2021 Oral: On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
  Shahar Azulay · Edward Moroshko · Mor Shpigel Nacson · Blake Woodworth · Nati Srebro · Amir Globerson · Daniel Soudry
- 2021 Poster: Compositional Video Synthesis with Action Graphs
  Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2021 Spotlight: Compositional Video Synthesis with Action Graphs
  Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2019 Poster: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem
  Alon Brutzkus · Amir Globerson
- 2019 Oral: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem
  Alon Brutzkus · Amir Globerson
- 2019 Poster: Low Latency Privacy Preserving Inference
  Alon Brutzkus · Ran Gilad-Bachrach · Oren Elisha
- 2019 Oral: Low Latency Privacy Preserving Inference
  Alon Brutzkus · Ran Gilad-Bachrach · Oren Elisha
- 2018 Poster: Learning to Optimize Combinatorial Functions
  Nir Rosenfeld · Eric Balkanski · Amir Globerson · Yaron Singer
- 2018 Poster: Predict and Constrain: Modeling Cardinality in Deep Structured Prediction
  Nataly Brukhim · Amir Globerson
- 2018 Oral: Learning to Optimize Combinatorial Functions
  Nir Rosenfeld · Eric Balkanski · Amir Globerson · Yaron Singer
- 2018 Oral: Predict and Constrain: Modeling Cardinality in Deep Structured Prediction
  Nataly Brukhim · Amir Globerson
- 2017 Poster: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
  Alon Brutzkus · Amir Globerson
- 2017 Poster: Learning Infinite Layer Networks without the Kernel Trick
  Roi Livni · Daniel Carmon · Amir Globerson
- 2017 Talk: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
  Alon Brutzkus · Amir Globerson
- 2017 Talk: Learning Infinite Layer Networks without the Kernel Trick
  Roi Livni · Daniel Carmon · Amir Globerson