One of the most widely used optimization methods for large-scale machine learning problems is distributed asynchronous stochastic gradient descent (DASGD). However, a key issue that arises here is that of delayed gradients: when a "worker" node asynchronously contributes a gradient update to the "master", the global model parameter may have changed, rendering this information stale. In massively parallel computing grids, these delays can quickly add up if the computational throughput of a node is saturated, so the convergence of DASGD is uncertain under these conditions. Nevertheless, by using a judiciously chosen quasilinear step-size sequence, we show that it is possible to amortize these delays and achieve global convergence with probability 1, even when the delays grow at a polynomial rate. In this way, our results help reaffirm the successful application of DASGD to large-scale optimization problems.
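To make the staleness issue concrete, here is a minimal toy simulation of asynchronous SGD with delayed gradients on a one-dimensional quadratic. It is only a sketch: the objective, the random-delay model, and the sublinear step-size schedule `0.5 / (n + 10) ** 0.75` are illustrative assumptions, not the paper's actual setting or quasilinear step-size sequence.

```python
import random

def dasgd_quadratic(steps=2000, max_delay=5, x0=10.0, seed=0):
    """Toy simulation of asynchronous SGD with stale gradients on
    f(x) = x**2 / 2, whose gradient at x is simply x.

    A worker reads the current iterate, and its gradient reaches the
    master only after a random delay of up to `max_delay` steps, so
    the master applies gradients evaluated at stale iterates. The
    step-size alpha_n ~ n**(-3/4) is an illustrative sublinear
    schedule, not the paper's exact sequence.
    """
    rng = random.Random(seed)
    x = x0
    pending = []  # (arrival step, gradient computed on a stale iterate)
    for n in range(1, steps + 1):
        # a worker reads x now; its gradient arrives after a random delay
        delay = rng.randint(0, max_delay)
        pending.append((n + delay, x))
        # the master applies every gradient that has arrived by step n
        alpha = 0.5 / (n + 10) ** 0.75  # offset tames the earliest steps
        arrived = [g for t, g in pending if t <= n]
        pending = [(t, g) for t, g in pending if t > n]
        for g in arrived:
            x -= alpha * g
    return x
```

With a small bounded delay and a decaying step-size, the iterate still approaches the minimizer at 0 despite every applied gradient being stale; increasing `max_delay` while keeping the step-size fixed is what makes convergence delicate.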
Thu Jul 12 02:50 AM -- 03:00 AM (PDT) @ A9
Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?