Talk
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié

Wed Aug 9th 11:06 -- 11:24 AM @ Parkside 2

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e., master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
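
The sketch below is a minimal single-process simulation of the centralized (master/slave) setting described above: each worker holds a local least-squares objective and the master runs Nesterov's accelerated gradient descent on their average. It is an illustrative approximation under assumed quadratic local objectives, not the paper's code; the names `A`, `b`, and `local_gradient` are hypothetical, and the MSDA method for the decentralized setting is not implemented here.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): n_workers workers each
# hold a local least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2, and the
# master runs Nesterov's accelerated gradient descent on the global average
# f(x) = (1/n) * sum_i f_i(x). Each iteration corresponds to one broadcast of
# the current iterate and one aggregation of local gradients over the network.

rng = np.random.default_rng(0)
n_workers, d = 5, 20
A = [rng.standard_normal((30, d)) for _ in range(n_workers)]  # hypothetical local data
b = [rng.standard_normal(30) for _ in range(n_workers)]

def local_gradient(i, x):
    """Gradient of the i-th local objective, computed 'at worker i'."""
    return A[i].T @ (A[i] @ x - b[i])

# Global smoothness and strong-convexity constants of f, estimated directly
# from the data for this toy example.
H = sum(A_i.T @ A_i for A_i in A) / n_workers
eigs = np.linalg.eigvalsh(H)
L, mu = eigs[-1], max(eigs[0], 1e-8)
kappa_g = L / mu  # global condition number, as in the O(sqrt(kappa_g) ...) rate

x = y = np.zeros(d)
beta = (np.sqrt(kappa_g) - 1) / (np.sqrt(kappa_g) + 1)  # momentum for strongly convex f
for t in range(200):
    # Master broadcasts y, workers return local gradients, master averages them.
    grad = sum(local_gradient(i, y) for i in range(n_workers)) / n_workers
    x_next = y - grad / L              # gradient step with step size 1/L
    y = x_next + beta * (x_next - x)   # Nesterov momentum step
    x = x_next
```

Under these assumptions, the iteration count scales as $O(\sqrt{\kappa_g}\ln(1/\varepsilon))$, and each iteration adds a communication cost proportional to $\Delta\tau$ in the master/slave topology, matching the rate quoted in the abstract.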

Author Information

Kevin Scaman (MSR-INRIA Joint Center)
Francis Bach (INRIA)
Sebastien Bubeck (Microsoft Research)
Yin Tat Lee (Microsoft Research)
Laurent Massoulié (MSR-INRIA Joint Center)
