

Spotlight in Workshop: Understanding and Improving Generalization in Deep Learning

How Learning Rate and Delay Affect Minima Selection in Asynchronous Training of Neural Networks: Toward Closing the Generalization Gap

2019 Spotlight


Authors: Niv Giladi, Mor Shpigel Nacson, Elad Hoffer and Daniel Soudry

Abstract: Background: Recent developments have made it possible to significantly accelerate neural network training using large batch sizes and data parallelism. Training in an asynchronous fashion, where gradient delay occurs, can make training even more scalable. However, asynchronous training has its pitfalls, chiefly a degradation in generalization even after the algorithm has converged. This gap remains poorly understood, as theoretical analysis has so far focused mainly on the convergence rate of asynchronous methods. Contributions: We examine asynchronous training from the perspective of dynamical stability. We find that the degree of delay interacts with the learning rate to change the set of minima accessible to an asynchronous stochastic gradient descent algorithm. We derive closed-form rules for how the hyperparameters can be changed while keeping the accessible set the same. Specifically, for high delay values, we find that the learning rate should be decreased inversely with the delay, and we discuss the effect of momentum. We provide empirical experiments that validate our theoretical findings.
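To make the stated rule concrete, the sketch below is a minimal, hypothetical illustration (not the authors' code) of delayed SGD on a one-dimensional quadratic loss. The function names, the curvature h, the noise level, the delay, and the 1/delay rescaling are illustrative assumptions only: with a fixed learning rate the iterates leave the minimum once gradients become stale, while shrinking the learning rate inversely with the delay keeps the minimum dynamically stable.

import numpy as np

def rescale_lr(base_lr, delay):
    # Hypothetical rule sketched from the abstract: for large delays,
    # decrease the learning rate roughly inversely with the delay.
    return base_lr / max(1, delay)

def delayed_sgd(lr, delay, h=10.0, steps=500, w0=1.0, noise=0.01, seed=0):
    # Toy delayed SGD on the 1-D quadratic loss L(w) = 0.5 * h * w^2,
    # where each gradient is computed on an iterate that is `delay` steps old.
    rng = np.random.default_rng(seed)
    history = [w0] * (delay + 1)   # buffer of past iterates (oldest first)
    w = w0
    for _ in range(steps):
        w_stale = history[0]                                # stale parameter copy
        grad = h * w_stale + noise * rng.standard_normal()  # noisy delayed gradient
        w = w - lr * grad
        history = history[1:] + [w]
    return abs(w)

delay, base_lr = 8, 0.12
print("fixed lr    :", delayed_sgd(base_lr, delay))                        # iterates blow up
print("lr ~ 1/delay:", delayed_sgd(rescale_lr(base_lr, delay), delay))     # stays near the minimum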
