Optimal Complexity in Decentralized Training

Yucheng Lu · Christopher De Sa

Keywords: [ Distributed and Parallel Optimization ]

Award: Outstanding Paper Honorable Mention


Decentralization is a promising method of scaling up parallel machine learning systems. In this paper, we provide a lower bound on the iteration complexity of such methods in the stochastic non-convex setting. This lower bound reveals a theoretical gap between the optimal rate and the known convergence rates of many existing decentralized training algorithms, such as D-PSGD. We prove by construction that the lower bound is tight and achievable. Motivated by these insights, we further propose DeTAG, a practical gossip-style decentralized algorithm that matches the lower bound up to only a logarithmic gap. Empirically, we compare DeTAG with other decentralized algorithms on image classification tasks and show that it converges faster than the baselines, especially on unshuffled data and in sparse networks.
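For readers unfamiliar with the gossip-style updates the abstract refers to, below is a minimal NumPy sketch of a single D-PSGD-style step: each worker mixes its parameters with its neighbors' via a doubly-stochastic matrix W, then takes a local stochastic gradient step. The function name, the ring topology, and all parameter values are illustrative assumptions, not the paper's implementation; DeTAG itself additionally uses gradient tracking and accelerated gossip, which this sketch omits.

```python
import numpy as np

def gossip_sgd_step(models, grads, W, lr):
    """One D-PSGD-style update (illustrative sketch).

    models: (n_workers, dim) array; row i holds worker i's parameters
    grads:  (n_workers, dim) array; row i is worker i's stochastic gradient
    W:      (n_workers, n_workers) doubly-stochastic mixing matrix;
            W[i, j] > 0 only if workers i and j are network neighbors
    lr:     step size
    """
    # Gossip averaging: x_i <- sum_j W[i, j] * x_j
    mixed = W @ models
    # Local stochastic gradient step on each worker
    return mixed - lr * grads

# Toy usage on a ring of 4 workers (hypothetical setup):
n, dim = 4, 3
rng = np.random.default_rng(0)
# Ring topology: each worker averages itself and its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
models = rng.normal(size=(n, dim))
grads = rng.normal(size=(n, dim))  # stand-in for local minibatch gradients
models = gossip_sgd_step(models, grads, W, lr=0.1)
```

Sparser topologies make W mix information more slowly (a smaller spectral gap), which is one reason convergence behavior on sparse networks, as studied in the paper's experiments, is of particular interest.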
