Poster
Is Local SGD Better than Minibatch SGD?
Blake Woodworth · Kumar Kshitij Patel · Sebastian Stich · Zhen Dai · Brian Bullins · Brendan McMahan · Ohad Shamir · Nati Srebro

Tue Jul 14 08:00 AM -- 08:45 AM & Tue Jul 14 07:00 PM -- 07:45 PM (PDT)

We study local SGD (also known as parallel SGD and federated SGD), a natural and frequently used distributed optimization method. Its theoretical foundations are currently lacking, and we highlight how all existing error guarantees in the convex setting are dominated by a simple baseline, minibatch SGD. (1) For quadratic objectives we prove that local SGD strictly dominates minibatch SGD and that accelerated local SGD is minimax optimal for quadratics; (2) For general convex objectives we provide the first guarantee that at least \emph{sometimes} improves over minibatch SGD, but our guarantee does not always improve over, nor even match, minibatch SGD; (3) We show that local SGD does \emph{not} dominate minibatch SGD by presenting a lower bound on the performance of local SGD that is worse than the minibatch SGD guarantee.
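To make the comparison concrete, here is a minimal sketch (not the paper's code or experiments) contrasting the two update schemes on a toy quadratic with noisy gradients. The names and values (M, K, R, lr, grad_oracle) are illustrative assumptions: both methods use the same number of stochastic gradients per round, but minibatch SGD evaluates them all at a single shared iterate, while local SGD lets each machine take K sequential steps before averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, K, R, lr = 10, 8, 5, 50, 0.1  # dimension, machines, local steps, rounds, step size

def grad_oracle(w):
    """Stochastic gradient of f(w) = 0.5 * ||w||^2: true gradient plus noise."""
    return w + 0.1 * rng.standard_normal(w.shape)

def minibatch_sgd(w0):
    # Each round: all M machines evaluate K gradients at the SAME iterate,
    # the M*K gradients are averaged, and a single step is taken.
    w = w0.copy()
    for _ in range(R):
        g = np.mean([grad_oracle(w) for _ in range(M * K)], axis=0)
        w -= lr * g
    return w

def local_sgd(w0):
    # Each round: every machine runs K sequential local steps starting from
    # the shared iterate, then the M local iterates are averaged.
    w = w0.copy()
    for _ in range(R):
        local_iterates = []
        for _ in range(M):
            wm = w.copy()
            for _ in range(K):
                wm -= lr * grad_oracle(wm)
            local_iterates.append(wm)
        w = np.mean(local_iterates, axis=0)
    return w

w0 = rng.standard_normal(d)
print("minibatch SGD final loss:", 0.5 * np.sum(minibatch_sgd(w0) ** 2))
print("local SGD final loss:    ", 0.5 * np.sum(local_sgd(w0) ** 2))
```

Both routines consume K * R stochastic gradients per machine; the paper's question is which scheme makes better use of that budget in the convex setting.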

Author Information

Blake Woodworth (Toyota Technological Institute at Chicago)
Kumar Kshitij Patel (Toyota Technological Institute at Chicago)
Sebastian Stich (EPFL)
Zhen Dai (University of Chicago)
Brian Bullins (TTI Chicago)
Brendan McMahan (Google)
Ohad Shamir (Weizmann Institute of Science)
Nati Srebro (Toyota Technological Institute at Chicago)
