

Spotlight

Federated Learning under Arbitrary Communication Patterns

Dmitrii Avdiukhin · Shiva Kasiviswanathan


Abstract:

Federated Learning is a distributed learning setting where the goal is to train a centralized model with training data distributed over a large number of heterogeneous clients, each with unreliable and relatively slow network connections. A common optimization approach used in federated learning is based on the idea of local SGD: each client runs some number of SGD steps locally, and the updated local models are then averaged to form the updated global model on the coordinating server. In this paper, we investigate the performance of an asynchronous version of local SGD in which the clients can communicate with the server at arbitrary time intervals. Our main result shows that for smooth strongly convex and smooth nonconvex functions we achieve convergence rates that match those of the synchronous version, which requires all clients to communicate simultaneously.
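To make the local SGD scheme described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code): each of several clients runs a fixed number of SGD steps on its own toy quadratic objective, and the server averages the resulting local models into the new global model. The client data, objective, learning rate, and step counts are all illustrative assumptions; in the asynchronous setting studied in the paper, the communication times would instead vary arbitrarily across clients.

```python
import numpy as np

# Hypothetical toy setup: client i holds a quadratic objective
# f_i(w) = 0.5 * ||A_i w - b_i||^2; the global objective is their average.
rng = np.random.default_rng(0)
num_clients, dim, samples_per_client = 8, 5, 20
A = [rng.normal(size=(samples_per_client, dim)) for _ in range(num_clients)]
b = [rng.normal(size=samples_per_client) for _ in range(num_clients)]

def stochastic_grad(i, w):
    """One-sample stochastic gradient of f_i at w."""
    j = rng.integers(samples_per_client)
    return (A[i][j] @ w - b[i][j]) * A[i][j]

def local_sgd(num_rounds=50, local_steps=10, lr=0.05):
    """Synchronous local SGD: in every round, each client runs `local_steps`
    SGD steps starting from the current global model, and the server then
    averages all local models to form the new global model."""
    w_global = np.zeros(dim)
    for _ in range(num_rounds):
        local_models = []
        for i in range(num_clients):
            w = w_global.copy()          # client starts from the global model
            for _ in range(local_steps):
                w -= lr * stochastic_grad(i, w)
            local_models.append(w)       # client reports its local model
        w_global = np.mean(local_models, axis=0)  # server-side averaging
    return w_global

print(local_sgd())
```

In the asynchronous variant, the inner loop over a shared `local_steps` would be replaced by per-client communication times, so that each client may run a different (and possibly time-varying) number of local steps between averaging events.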
