A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
Xinwei Zhang · Mingyi Hong · Sairaj Dhople · Nicola Elia

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #726

In modern machine learning systems, distributed algorithms are deployed across applications to ensure data privacy and optimal utilization of computational resources. This work offers a fresh perspective for modeling, analyzing, and designing distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms---including the popular Gradient Tracking method for decentralized learning, and FedPD and Scaffold for federated learning---can each be modeled as a discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework for analyzing the convergence of the entire algorithm class. It also makes it easy to add desirable features such as differential privacy guarantees, or to handle practical settings such as partial agent participation, communication compression, and imperfect communication, in both algorithm design and analysis.
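To make the algorithm class concrete, the sketch below implements decentralized Gradient Tracking, one of the methods the abstract says can be cast as a discrete-time feedback-control system: each agent mixes its iterate with neighbors (the "plant" coupling) while a tracker variable estimates the network-average gradient (the "controller" state). This is an illustrative toy, not the paper's framework; the quadratic local objectives, ring mixing matrix, and step size are all assumptions chosen for a small runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3

# Assumed local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative).
A = 0.5 * rng.standard_normal((n_agents, dim, dim)) + np.eye(dim)
b = rng.standard_normal((n_agents, dim))

def grad(i, x):
    """Gradient of agent i's local objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.02                       # step size (assumed)
x = np.zeros((n_agents, dim))      # local iterates, one row per agent
y = np.array([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers

for _ in range(5000):
    x_new = W @ x - alpha * y      # consensus mixing + descent along tracker
    # Tracker update: mix, then add the change in the local gradient,
    # so y_i keeps tracking the average of the local gradients.
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])
    x = x_new

# All agents should reach consensus on the minimizer of (1/n) sum_i f_i,
# which solves (sum_i A_i^T A_i) x = sum_i A_i^T b_i.
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n_agents)),
                         sum(A[i].T @ b[i] for i in range(n_agents)))
print("max deviation from optimum:", np.max(np.abs(x - x_star)))
```

The two coupled recursions (iterate and tracker) are exactly the kind of multi-rate feedback structure the abstract refers to: stochastic gradients, compression, or intermittent participation enter as perturbations of this closed-loop system.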

Author Information

Xinwei Zhang (University of Minnesota)
Mingyi Hong (University of Minnesota)
Sairaj Dhople (University of Minnesota)
Nicola Elia (University of Minnesota)
