Spotlight in Workshop: Understanding and Improving Generalization in Deep Learning
Towards Large Scale Structure of the Loss Landscape of Neural Networks
Authors: Stanislav Fort and Stanislaw Jastrzebski
Abstract: There are many surprising and perhaps counter-intuitive properties of the optimization of deep neural networks. We propose and experimentally verify a unified phenomenological model of the loss landscape that incorporates many of them. Our core idea is to model the loss landscape as a set of high-dimensional \emph{sheets} that together form a distributed, large-scale, inter-connected structure. For instance, we predict the existence of low-loss subspaces connecting a set (not only a pair) of solutions, and verify it experimentally. We conclude by showing that hyperparameter choices such as learning rate, batch size, dropout and $L_2$ regularization affect the path the optimizer takes through the landscape in a similar way.
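The prediction about low-loss subspaces connecting a set of solutions can be probed with a simple experiment: train several networks independently, then evaluate the loss at barycentric combinations of their parameters (points in the simplex spanned by the solutions). The following is a minimal sketch of this idea on a toy regression problem with a tiny NumPy network; the data, architecture, and training setup are illustrative assumptions, not the paper's actual experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration; not from the paper).
X = rng.normal(size=(200, 4))
y = np.sin(X @ rng.normal(size=4))

def init_params(seed, hidden=16):
    r = np.random.default_rng(seed)
    return [r.normal(scale=0.5, size=(4, hidden)),
            r.normal(scale=0.5, size=(hidden, 1))]

def forward(params, X):
    W1, W2 = params
    return np.tanh(X @ W1) @ W2

def loss(params):
    return float(np.mean((forward(params, X) - y[:, None]) ** 2))

def train(params, steps=300, lr=0.05):
    # Plain full-batch gradient descent on mean squared error.
    W1, W2 = params
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = 2 * (h @ W2 - y[:, None]) / len(X)  # d(mse)/d(pred)
        gW2 = h.T @ err
        gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))
        W1 -= lr * gW1
        W2 -= lr * gW2
    return [W1, W2]

# A set of independently trained solutions (a set, not only a pair).
solutions = [train(init_params(s)) for s in range(3)]

def barycentric(params_list, alphas):
    """Combine parameter sets with weights alphas (summing to 1)."""
    return [sum(a * p[i] for a, p in zip(alphas, params_list))
            for i in range(len(params_list[0]))]

# Probe points in the simplex spanned by the three solutions: the
# vertices, an edge midpoint, and the center.
alphas_grid = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
               (0.5, 0.5, 0.0), (1/3, 1/3, 1/3)]
losses = [loss(barycentric(solutions, a)) for a in alphas_grid]
print(losses)
```

A connecting low-loss subspace would show up as interior points (the last two entries) having loss comparable to the vertices; in general, finding such subspaces requires the curve- or simplex-fitting procedures studied in the mode-connectivity literature rather than naive linear mixing.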