Spotlight

A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms

Xinwei Zhang · Mingyi Hong · Sairaj Dhople · Nicola Elia

Hall G

Abstract:

In modern machine learning systems, distributed algorithms are deployed across applications to ensure data privacy and optimal utilization of computational resources. This work offers a fresh perspective on modeling, analyzing, and designing distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms---including the popular Gradient Tracking method for decentralized learning, and FedPD and Scaffold for federated learning---can be modeled as discrete-time stochastic feedback-control systems, possibly with multiple sampling rates. This key observation allows us to develop a generic framework for analyzing the convergence of the entire algorithm class. It also makes it easy to add desirable features such as differential privacy guarantees, and to handle practical settings such as partial agent participation, communication compression, and imperfect communication in algorithm design and analysis.
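To make the control-system view concrete, below is a minimal sketch of decentralized Gradient Tracking, one member of the algorithm class named in the abstract. It is an illustration under assumptions not taken from the paper: a ring communication graph with Metropolis mixing weights, scalar quadratic local objectives f_i(x) = 0.5 (x - b_i)^2, and the standard single-rate gradient-tracking recursion. The stacked iterates x (the "plant" state) and gradient trackers y (the "controller" state) evolve as a discrete-time feedback system; this is not the paper's exact multi-rate formulation.

```python
# Illustrative sketch (assumptions: ring graph, quadratic objectives,
# standard single-rate gradient tracking), not the paper's exact model.
import numpy as np

n, alpha, T = 5, 0.1, 200          # agents, step size, iterations
rng = np.random.default_rng(0)
b = rng.normal(size=n)             # local minimizers; global optimum is b.mean()

# Doubly stochastic mixing matrix W for a ring graph (Metropolis weights).
W = np.eye(n) / 3
for i in range(n):
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

grad = lambda x: x - b             # stacked gradients of f_i(x) = 0.5*(x - b_i)^2

x = np.zeros(n)                    # local iterates: the "plant" state
y = grad(x)                        # gradient trackers: the "controller" state
for _ in range(T):
    x_next = W @ x - alpha * y             # consensus step + feedback correction
    y = W @ y + grad(x_next) - grad(x)     # track the network-average gradient
    x = x_next

print("consensus error:", np.abs(x - x.mean()).max())
print("optimality gap :", abs(x.mean() - b.mean()))
```

Viewed this way, the mixing matrix W and the tracker update play the role of a feedback controller driving all agents to the minimizer of the average objective; the framework in the paper generalizes this picture to stochastic updates and multiple sampling rates.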
