

Poster

Stochastic Gradient Push for Distributed Deep Learning

Mahmoud Assran · Nicolas Loizou · Nicolas Ballas · Michael Rabbat

Pacific Ballroom #183

Keywords: [ Parallel and Distributed Learning ] [ Optimization ] [ Non-convex Optimization ] [ Large Scale Learning and Big Data ] [ Algorithms ]


Abstract:

Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via AllReduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. This paper studies Stochastic Gradient Push (SGP), which combines PushSum with stochastic gradient updates. We prove that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, and that all nodes achieve consensus. We empirically validate the performance of SGP on image classification (ResNet-50, ImageNet) and machine translation (Transformer, WMT'16 En-De) workloads.
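To make the combination of PushSum gossip and stochastic gradient steps concrete, here is a minimal simulation sketch. It assumes a toy least-squares objective, a static directed ring graph, and uniform column-stochastic mixing weights; the variable names (x, w, z) and all hyperparameters are illustrative assumptions, not the paper's experimental setup (ResNet-50/Transformer training).

```python
# Minimal sketch of Stochastic Gradient Push (SGP) on a toy problem.
# Assumption: a static directed ring graph where each node pushes half of its
# state to itself and half to one out-neighbor (a column-stochastic mixing).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, steps, lr = 8, 5, 200, 0.05

# Toy data: each node holds a local least-squares problem; the global
# objective is the average of the local objectives.
A = rng.normal(size=(n_nodes, 20, dim))
b = rng.normal(size=(n_nodes, 20))

def local_grad(i, z):
    """Mini-batch stochastic gradient of node i's local loss at the de-biased iterate z."""
    idx = rng.integers(0, 20, size=4)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ z - bi) / len(idx)

# PushSum state: model parameters x_i and scalar push-sum weights w_i.
x = np.zeros((n_nodes, dim))
w = np.ones(n_nodes)

# Directed ring: node i sends to itself and to (i + 1) mod n, each with weight 1/2.
out_neighbors = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]

for t in range(steps):
    # 1) De-bias the parameters, then take a local stochastic gradient step.
    z = x / w[:, None]
    x = x - lr * np.stack([local_grad(i, z[i]) for i in range(n_nodes)])

    # 2) PushSum gossip: each node pushes half of (x_i, w_i) to each out-neighbor.
    new_x, new_w = np.zeros_like(x), np.zeros_like(w)
    for i, dests in enumerate(out_neighbors):
        for j in dests:
            new_x[j] += 0.5 * x[i]
            new_w[j] += 0.5 * w[i]
    x, w = new_x, new_w

# Each node evaluates the de-biased parameters z_i = x_i / w_i; PushSum drives
# these toward consensus on the average model across nodes.
z = x / w[:, None]
print("max pairwise disagreement:", np.abs(z - z.mean(axis=0)).max())
```

The de-biasing step z_i = x_i / w_i is what distinguishes PushSum from plain gossip averaging: the push-sum weights correct the bias introduced by directed (column-stochastic rather than doubly-stochastic) communication, so only approximate averaging per round is needed.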
