

Poster in the DMLR Workshop: Data-centric Machine Learning Research

CD-GraB: Coordinating Distributed Example Orders for Provably Accelerated Training

A. Feder Cooper · Wentao Guo · Duc Khiem Pham · Tiancheng Yuan · Charlie Ruan · Yucheng Lu · Chris De Sa


Abstract:

Recent research on online Gradient Balancing (GraB) reveals that there exist permutation-based data example orders that are guaranteed to outperform random reshuffling (RR). Whereas RR arbitrarily permutes training data examples, GraB leverages information in stale example gradients from prior epochs to order examples for the next epoch, achieving a provably faster convergence rate than RR. However, GraB is limited by design: while it demonstrates an impressive ability to scale up training on \emph{centralized} data, it does not naturally extend to modern \emph{distributed} ML workloads. We therefore propose \emph{Coordinated Distributed GraB} (CD-GraB), which uses insights from prior work on kernel thinning to translate the benefits of provably faster permutation-based example ordering to distributed settings. With negligible overhead, CD-GraB exhibits a linear speedup in convergence rate over centralized GraB and empirically outperforms baselines, including distributed RR, on a variety of benchmark tasks.
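To make the gradient-balancing idea behind GraB concrete, the sketch below shows one way an epoch's stale per-example gradients can be turned into an example order for the next epoch. This is a minimal, single-machine illustration under simplifying assumptions: `grab_reorder` and the greedy sign rule are hypothetical stand-ins for the authors' herding-based balancer, not their implementation, and CD-GraB's distributed coordination across workers is not shown.

```python
import numpy as np

def grab_reorder(stale_grads):
    """Illustrative GraB-style reordering from stale per-example gradients.

    stale_grads: (n, d) array of per-example gradients recorded during the
    previous epoch. Returns a permutation of range(n) to use next epoch.
    """
    n, d = stale_grads.shape
    centered = stale_grads - stale_grads.mean(axis=0)  # mean-center the gradients

    running = np.zeros(d)   # running signed sum maintained by the balancer
    front, back = [], []    # examples assigned +1 go to the front, -1 to the back
    for i in range(n):
        g = centered[i]
        # Greedy sign choice: pick the sign that keeps the running sum small.
        if np.linalg.norm(running + g) <= np.linalg.norm(running - g):
            running += g
            front.append(i)
        else:
            running -= g
            back.append(i)

    # New order: +1 examples in visit order, then -1 examples in reverse order.
    return front + back[::-1]

# Example usage with random stand-in gradients (purely for illustration).
rng = np.random.default_rng(0)
order = grab_reorder(rng.normal(size=(8, 4)))
print(order)
```

Roughly speaking, CD-GraB extends this kind of balancing step to the distributed setting by coordinating the sign assignments across workers (drawing on kernel-thinning-style results), so that the combined example order retains the provable speedup without requiring centralized data.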
