Oral
Stochastic Gradient Push for Distributed Deep Learning
Mahmoud Assran · Nicolas Loizou · Nicolas Ballas · Michael Rabbat

Wed Jun 12 11:25 AM -- 11:30 AM (PDT) @ Room 103

Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via All-Reduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. This paper studies Stochastic Gradient Push (SGP), which combines PushSum with stochastic gradient updates. We prove that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, that all nodes achieve consensus, and that SGP achieves a linear speedup with respect to the number of compute nodes. Furthermore, we empirically validate the performance of SGP on image classification and machine translation workloads. Our code will be made publicly available.
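To make the combination of PushSum and stochastic gradient steps concrete, here is a minimal single-process NumPy sketch of one SGP-style iteration under the abstract's description: each node takes a local SGD step on its PushSum numerator, gossips the numerator and weight using a column-stochastic mixing matrix, and evaluates gradients at the de-biased ratio. The helper names (push_matrix, sgp_step, stochastic_grad), the toy topology, and the quadratic objective are illustrative assumptions, not the paper's released implementation.

import numpy as np

def push_matrix(out_neighbors):
    # Build a column-stochastic mixing matrix from a directed out-neighbor list:
    # node j splits its PushSum mass uniformly among itself and its out-neighbors.
    n = len(out_neighbors)
    P = np.zeros((n, n))
    for j, nbrs in enumerate(out_neighbors):
        targets = [j] + list(nbrs)
        for i in targets:
            P[i, j] = 1.0 / len(targets)
    return P

def sgp_step(x, w, P, stochastic_grad, lr):
    # x: (n, d) PushSum numerators (one parameter vector per node)
    # w: (n,)   PushSum weights
    # P: (n, n) column-stochastic matrix; P[i, j] is the fraction node j sends to node i
    z = x / w[:, None]                                  # de-biased parameters used for gradients
    grads = np.stack([stochastic_grad(zi) for zi in z])
    x_half = x - lr * grads                             # local SGD step on the numerator
    return P @ x_half, P @ w                            # PushSum gossip on numerators and weights

# Toy usage: every node minimizes 0.5 * ||z - 1||^2 from noisy gradient samples.
rng = np.random.default_rng(0)
n, d = 8, 5
# Directed ring plus extra edges from even nodes, so P is column- but not row-stochastic.
out_neighbors = [[(j + 1) % n] + ([(j + 2) % n] if j % 2 == 0 else []) for j in range(n)]
x, w, P = rng.normal(size=(n, d)), np.ones(n), push_matrix(out_neighbors)
noisy_grad = lambda z: (z - 1.0) + 0.01 * rng.normal(size=z.shape)
for _ in range(200):
    x, w = sgp_step(x, w, P, noisy_grad, lr=0.1)
print("max deviation from the optimum across nodes:", np.abs(x / w[:, None] - 1.0).max())

Because the mixing matrix only needs to be column-stochastic, each node can push messages without waiting for acknowledgements, which is what makes this scheme robust to stragglers compared with exact All-Reduce averaging.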

Author Information

Mahmoud Assran (McGill University/Facebook AI Research)
Nicolas Loizou (The University of Edinburgh)
https://www.maths.ed.ac.uk/~s1461357/
Nicolas Ballas (Facebook AI Research)
Michael Rabbat (Facebook)
