In modern machine learning systems, distributed algorithms are deployed across a wide range of applications to preserve data privacy and make optimal use of computational resources. This work offers a fresh perspective for modeling, analyzing, and designing distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms, including the popular Gradient Tracking method for decentralized learning and the FedPD and Scaffold methods for federated learning, can be modeled as a certain discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework for analyzing the convergence of the entire algorithm class. It also makes it easy to add desirable features such as differential privacy guarantees, or to handle practical settings such as partial agent participation, communication compression, and imperfect communication, in both algorithm design and analysis.
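To make the control-system viewpoint concrete, here is a minimal, hypothetical sketch (not code from the paper) that writes decentralized gradient tracking as a discrete-time feedback system: the mixing matrix acts as the feedback interconnection, the agent iterates x are the plant state, and the gradient trackers y play the role of the controller state. The quadratic local objectives, ring-graph mixing matrix W, step size alpha, and iteration count are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: decentralized gradient tracking viewed as a
# discrete-time feedback system. Not code from the paper; the quadratic
# objectives, ring-graph mixing matrix W, and step size are assumptions.

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, dimension
A = [rng.standard_normal((d, d)) for _ in range(n)]
Q = [a.T @ a + np.eye(d) for a in A]          # local curvature; each f_i strongly convex
b = [rng.standard_normal(d) for _ in range(n)]

def grad(i, x):
    """Gradient of the local objective f_i(x) = 0.5 x^T Q_i x - b_i^T x."""
    return Q[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring graph: the "feedback gain"
# that couples the agents' states.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.01                                  # illustrative step size
x = np.zeros((n, d))                          # plant state: agent iterates
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                                  # controller state: gradient trackers

for _ in range(2000):
    # Plant update: consensus feedback through W minus the tracked gradient.
    x = W @ x - alpha * y
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    # Controller update: consensus plus the gradient innovation, so each
    # y_i tracks the network-average gradient.
    y = W @ y + g_new - g
    g = g_new

x_star = np.linalg.solve(sum(Q), sum(b))      # centralized minimizer of sum_i f_i
print("consensus error :", np.linalg.norm(x - x.mean(axis=0)))
print("optimality error:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Under this reading, stochastic gradients, compression, or partial participation simply perturb the feedback loop, which is the kind of perturbation the paper's multi-rate control framework is set up to analyze.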
Author Information
Xinwei Zhang (University of Minnesota)
Mingyi Hong (University of Minnesota)
Sairaj Dhople (University of Minnesota)
Nicola Elia (University of Minnesota)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
  Thu. Jul 21st, 08:45--08:50 PM, Room Hall G
More from the Same Authors
- 2021: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  Xinwei Zhang
- 2021: Understanding Clipped FedAvg: Convergence and Client-Level Differential Privacy
  Xinwei Zhang · Xiangyi Chen · Steven Wu · Mingyi Hong
- 2022: Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees
  Siliang Zeng · Chenliang Li · Alfredo Garcia · Mingyi Hong
- 2023: Robust Inverse Reinforcement Learning Through Bayesian Theory of Mind
  Ran Wei · Siliang Zeng · Chenliang Li · Alfredo Garcia · Anthony McDonald · Mingyi Hong
- 2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach
  Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong
- 2023 Poster: Understanding Backdoor Attacks through the Adaptability Hypothesis
  Xun Xian · Ganghua Wang · Jayanth Srinivasa · Ashish Kundu · Xuan Bi · Mingyi Hong · Jie Ding
- 2023 Poster: FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks
  Bingqing Song · Prashant Khanduri · Xinwei Zhang · Jinfeng Yi · Mingyi Hong
- 2022 Poster: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  Xinwei Zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2022 Poster: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  Xinwei Zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2021 Spotlight: Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  Shixiang Chen · Alfredo Garcia · Mingyi Hong · Shahin Shahrampour
- 2021 Poster: Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  Shixiang Chen · Alfredo Garcia · Mingyi Hong · Shahin Shahrampour
- 2020: Lightning Talks Session 2
  Jichan Chung · Saurav Prakash · Mikhail Khodak · Ravi Rahman · Vaikkunth Mugunthan · Xinwei Zhang · Hossein Hosseini
- 2020 Poster: Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: Joint Gradient Estimation and Tracking
  Haoran Sun · Songtao Lu · Mingyi Hong
- 2020 Poster: Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks
  Sijia Liu · Songtao Lu · Xiangyi Chen · Yao Feng · Kaidi Xu · Abdullah Al-Dujaili · Mingyi Hong · Una-May O'Reilly
- 2019 Poster: PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
  Songtao Lu · Mingyi Hong · Zhengdao Wang
- 2019 Oral: PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
  Songtao Lu · Mingyi Hong · Zhengdao Wang
- 2018 Poster: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
  Mingyi Hong · Meisam Razaviyayn · Jason Lee
- 2018 Oral: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
  Mingyi Hong · Meisam Razaviyayn · Jason Lee