

Oral

Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?

Zhengyuan Zhou · Panayotis Mertikopoulos · Nicholas Bambos · Peter Glynn · Yinyu Ye · Li-Jia Li · Li Fei-Fei

Abstract:

One of the most widely used optimization methods for large-scale machine learning problems is distributed asynchronous stochastic gradient descent (DASGD). However, a key issue that arises here is that of delayed gradients: when a “worker” node asynchronously contributes a gradient update to the “master”, the global model parameter may have changed, rendering this information stale. In massively parallel computing grids, these delays can quickly add up if the computational throughput of a node is saturated, so the convergence of DASGD is uncertain under these conditions. Nevertheless, by using a judiciously chosen quasilinear step-size sequence, we show that it is possible to amortize these delays and achieve global convergence with probability 1, even when the delays grow at a polynomial rate. In this way, our results help reaffirm the successful application of DASGD to large-scale optimization problems.
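To make the delayed-gradient setting concrete, the sketch below simulates DASGD on a single machine: the "master" applies gradients that a "worker" computed on a stale copy of the parameters, with the staleness allowed to grow polynomially in the iteration count, and a quasilinear (roughly 1/(n log n)) step-size schedule is used to damp the effect of that staleness. This is only an illustrative toy, not the paper's algorithm statement or experimental setup: the quadratic objective, the noise level, the square-root delay model, and the exact step-size constants are all assumptions made for the example.

```python
import numpy as np

# Toy single-process simulation of DASGD with delayed (stale) gradients.
# Illustrative sketch only: the objective, delay model, and step-size
# constants below are assumptions for demonstration.

rng = np.random.default_rng(0)
dim = 10
A = rng.standard_normal((dim, dim))
Q = A.T @ A / dim + np.eye(dim)      # random convex quadratic f(x) = 0.5 x'Qx
x_star = np.zeros(dim)               # its unique minimizer

def stoch_grad(x):
    """Noisy gradient of f at x (Gaussian noise stands in for stochastic sampling)."""
    return Q @ x + 0.1 * rng.standard_normal(dim)

def step_size(n):
    """Quasilinear schedule ~ 1/(n log n): decays just a bit faster than 1/n,
    which is what lets the updates amortize polynomially growing delays
    (constants here are an arbitrary illustrative choice)."""
    return 1.0 / ((n + 10) * np.log(n + 10))

T = 20000
history = [rng.standard_normal(dim)]  # master's parameter history
x = history[0]

for n in range(T):
    # Simulated staleness: the worker's gradient was computed on a model
    # that is `delay` iterations old, with the delay growing like sqrt(n).
    delay = rng.integers(0, int(1 + n ** 0.5) + 1)
    stale_x = history[max(0, n - delay)]

    # The master applies the stale gradient with the decaying step-size.
    x = x - step_size(n) * stoch_grad(stale_x)
    history.append(x)

print("distance to optimum:", np.linalg.norm(x - x_star))
```

Running the loop with a fixed step-size instead of the decaying schedule typically leaves the iterates hovering at a noise- and delay-dependent distance from the optimum, which is one way to see why the choice of step-size sequence matters in this setting.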
