Poster
Decentralized SGD and Average-direction SAM are Asymptotically Equivalent
Tongtian Zhu · Fengxiang He · Kaixuan Chen · Mingli Song · Dacheng Tao
Event URL: https://github.com/Raiden-Zhu/ICML-2023-DSGD-and-SAM
Decentralized stochastic gradient descent (D-SGD) allows collaborative learning on massive devices simultaneously without the control of a central server. However, existing theories claim that decentralization invariably undermines generalization. In this paper, we challenge the conventional belief and present a completely new perspective for understanding decentralized learning. We prove that D-SGD implicitly minimizes the loss function of an average-direction Sharpness-aware minimization (SAM) algorithm under general non-convex non-$\beta$-smooth settings. This surprising asymptotic equivalence reveals an intrinsic regularization-optimization trade-off and three advantages of decentralization: (1) there exists a free uncertainty evaluation mechanism in D-SGD to improve posterior estimation; (2) D-SGD exhibits a gradient smoothing effect; and (3) the sharpness regularization effect of D-SGD does not decrease as total batch size increases, which justifies the potential generalization benefit of D-SGD over centralized SGD (C-SGD) in large-batch scenarios.
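As a rough illustration of the D-SGD update discussed in the abstract, the sketch below runs standard decentralized SGD on a toy quadratic problem: each node mixes its parameters with its neighbours through a doubly stochastic gossip matrix and then takes a local stochastic gradient step, with no central server. The ring topology, gossip weights, and quadratic losses are illustrative assumptions, not the paper's experimental setup; the final consensus-distance printout only hints at the perturbation-around-the-average-iterate mechanism that the paper's SAM-style analysis formalizes.

```python
# Minimal D-SGD sketch on a toy quadratic objective (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, lr = 5, 10, 0.1

# Hypothetical ring topology: each node averages with its two neighbours.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_nodes] = 1 / 3
    W[i, (i + 1) % n_nodes] = 1 / 3

# Toy local objectives f_i(x) = 0.5 * ||x - c_i||^2 with heterogeneous targets c_i.
targets = rng.normal(size=(n_nodes, dim))
x = rng.normal(size=(n_nodes, dim))  # one parameter vector per node

def local_stochastic_grad(i, xi):
    """Gradient of node i's local loss plus noise standing in for minibatch sampling."""
    return (xi - targets[i]) + 0.1 * rng.normal(size=dim)

for step in range(100):
    x = W @ x  # gossip averaging with neighbours
    grads = np.stack([local_stochastic_grad(i, x[i]) for i in range(n_nodes)])
    x = x - lr * grads  # local SGD step on each node

# The consensus distance x_i - x_bar acts like a perturbation around the
# average iterate, which is the mechanism behind the average-direction
# SAM-style regularization the paper analyzes.
x_bar = x.mean(axis=0)
print("per-node consensus distance:", np.linalg.norm(x - x_bar, axis=1))
print("mean-iterate error:", np.linalg.norm(x_bar - targets.mean(axis=0)))
```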
Author Information
Tongtian Zhu (Zhejiang University)
Fengxiang He (University of Edinburgh)
Fengxiang He is a Lecturer at the Artificial Intelligence and its Applications Institute, School of Informatics, University of Edinburgh. He received his BSc in statistics from the University of Science and Technology of China, and his MPhil and PhD in computer science from the University of Sydney. He was an Algorithm Scientist at JD Explore Academy, JD.com, Inc., where he led its trustworthy AI team. His research interests are in the theory and practice of trustworthy AI, including deep learning theory, privacy-preserving machine learning, and algorithmic game theory, as well as applications in finance and economics. He is an Area Chair of UAI, AISTATS, and ACML.
Kaixuan Chen (Zhejiang University)
Mingli Song (Zhejiang University)
Dacheng Tao
More from the Same Authors
- 2023: Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning »
  Guozheng Ma · · Haoyu Wang · Lu Li · Zilin Wang · Zhen Wang · Li Shen · Xueqian Wang · Dacheng Tao
- 2023 Oral: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape »
  Yan Sun · Li Shen · Shixiang Chen · Liang Ding · Dacheng Tao
- 2023 Oral: Tilted Sparse Additive Models »
  Yingjie Wang · Hong Chen · Weifeng Liu · Fengxiang He · Tieliang Gong · YouCheng Fu · Dacheng Tao
- 2023 Poster: Structured Cooperative Learning with Graphical Model Priors »
  Shuangtong Li · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2023 Poster: Tilted Sparse Additive Models »
  Yingjie Wang · Hong Chen · Weifeng Liu · Fengxiang He · Tieliang Gong · YouCheng Fu · Dacheng Tao
- 2023 Poster: Improving the Model Consistency of Decentralized Federated Learning »
  Yifan Shi · Li Shen · Kang Wei · Yan Sun · Bo Yuan · Xueqian Wang · Dacheng Tao
- 2023 Poster: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape »
  Yan Sun · Li Shen · Shixiang Chen · Liang Ding · Dacheng Tao
- 2023 Poster: Learning to Learn from APIs: Black-Box Data-Free Meta-Learning »
  Zixuan Hu · Li Shen · Zhenyi Wang · Baoyuan Wu · Chun Yuan · Dacheng Tao
- 2022 Poster: Identity-Disentangled Adversarial Augmentation for Self-supervised Learning »
  Kaiwen Yang · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2022 Spotlight: Identity-Disentangled Adversarial Augmentation for Self-supervised Learning »
  Kaiwen Yang · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2022 Poster: DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training »
  Rong Dai · Li Shen · Fengxiang He · Xinmei Tian · Dacheng Tao
- 2022 Spotlight: DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training »
  Rong Dai · Li Shen · Fengxiang He · Xinmei Tian · Dacheng Tao
- 2022 Poster: Topology-aware Generalization of Decentralized SGD »
  Tongtian Zhu · Fengxiang He · Lan Zhang · Zhengyang Niu · Mingli Song · Dacheng Tao
- 2022 Spotlight: Topology-aware Generalization of Decentralized SGD »
  Tongtian Zhu · Fengxiang He · Lan Zhang · Zhengyang Niu · Mingli Song · Dacheng Tao
- 2017 Poster: Beyond Filters: Compact Feature Map for Portable Deep Model »
  Yunhe Wang · Chang Xu · Chao Xu · Dacheng Tao
- 2017 Talk: Beyond Filters: Compact Feature Map for Portable Deep Model »
  Yunhe Wang · Chang Xu · Chao Xu · Dacheng Tao
- 2017 Poster: Algorithmic Stability and Hypothesis Complexity »
  Tongliang Liu · Gábor Lugosi · Gergely Neu · Dacheng Tao
- 2017 Talk: Algorithmic Stability and Hypothesis Complexity »
  Tongliang Liu · Gábor Lugosi · Gergely Neu · Dacheng Tao