The federated learning (FL) framework enables edge clients to collaboratively learn a shared inference model while keeping training data private on the clients. Recently, many heuristic efforts have been made to generalize centralized adaptive optimization methods, such as SGDM, Adam, and AdaGrad, to federated settings in order to improve convergence and accuracy. However, there is still a paucity of theoretical principles on where and how to design and utilize adaptive optimization methods in federated settings. This work aims to develop novel adaptive optimization methods for FL from the perspective of the dynamics of ordinary differential equations (ODEs). First, an analytic framework is established that connects federated optimization methods to decompositions of the ODEs of the corresponding centralized optimizers. Second, based on this framework, a momentum-decoupling adaptive optimization method, FedDA, is developed to fully utilize the global momentum on each local iteration and accelerate training convergence. Last but not least, full-batch gradients are utilized at the end of training to mimic centralized optimization, ensuring convergence and overcoming the possible inconsistency caused by adaptive optimization methods.
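To make the momentum-decoupling idea concrete, below is a minimal PyTorch sketch, not the paper's actual algorithm: the function names (local_update, server_round), the hyperparameters (lr, beta, local_steps, lr_g), and the pseudo-gradient aggregation are illustrative assumptions; the exact FedDA update rules are specified in the paper. The key contrast with plain FedAvg is that each client applies the server's global momentum buffer on every local step rather than only at aggregation time.

```python
import copy
import itertools
import torch

def local_update(global_model, global_momentum, data_loader, loss_fn,
                 lr=0.01, beta=0.9, local_steps=10):
    """One client's local round: every local step folds in the server's
    (frozen) global momentum instead of using it only at aggregation."""
    model = copy.deepcopy(global_model)
    for x, y in itertools.islice(itertools.cycle(data_loader), local_steps):
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        with torch.no_grad():
            for p, g, m in zip(model.parameters(), grads, global_momentum):
                # Decoupled step: fresh local gradient g combined with the
                # global momentum m received from the server at round start.
                p.add_(beta * m + g, alpha=-lr)
    return model

def server_round(global_model, global_momentum, client_models,
                 beta=0.9, lr_g=1.0):
    """Aggregate client drifts into a pseudo-gradient, refresh the global
    momentum buffer, and apply it to the global model."""
    with torch.no_grad():
        client_params = [list(m.parameters()) for m in client_models]
        for i, p in enumerate(global_model.parameters()):
            # Pseudo-gradient: average drift of the clients this round.
            delta = torch.stack([p - cp[i] for cp in client_params]).mean(0)
            global_momentum[i].mul_(beta).add_(delta)
            p.sub_(lr_g * global_momentum[i])
```

In this sketch, global_momentum would be initialized as [torch.zeros_like(p) for p in global_model.parameters()], and in the final rounds the mini-batch gradient in local_update would be replaced by a full-batch gradient to mimic centralized optimization, as the abstract describes.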
Author Information
Jiayin Jin (Auburn University)
Jiaxiang Ren (Auburn University)
Yang Zhou (Auburn University)
Lingjuan Lyu (Sony AI Inc.)
Ji Liu (Baidu Research)
Dejing Dou (Baidu)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Accelerated Federated Learning with Decoupled Adaptive Optimization
  Thu, Jul 21 through Fri, Jul 22, Room Hall E
More from the Same Authors
- 2022 Poster: Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing
  Jiayin Jin · Zeru Zhang · Yang Zhou · Lingfei Wu
- 2022 Spotlight: Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing
  Jiayin Jin · Zeru Zhang · Yang Zhou · Lingfei Wu
- 2021 Poster: Integrated Defense for Resilient Graph Matching
  Jiaxiang Ren · Zijie Zhang · Jiayin Jin · Xin Zhao · Sixing Wu · Yang Zhou · Yelong Shen · Tianshi Che · Ruoming Jin · Dejing Dou
- 2021 Spotlight: Integrated Defense for Resilient Graph Matching
  Jiaxiang Ren · Zijie Zhang · Jiayin Jin · Xin Zhao · Sixing Wu · Yang Zhou · Yelong Shen · Tianshi Che · Ruoming Jin · Dejing Dou
- 2021 Poster: Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
  Xin Zhao · Zeru Zhang · Zijie Zhang · Lingfei Wu · Jiayin Jin · Yang Zhou · Ruoming Jin · Dejing Dou · Da Yan
- 2021 Spotlight: Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
  Xin Zhao · Zeru Zhang · Zijie Zhang · Lingfei Wu · Jiayin Jin · Yang Zhou · Ruoming Jin · Dejing Dou · Da Yan
- 2021: FedCube: Federated Learning and Data Federation for Collaborative Data Processing
  Ji Liu
- 2021 Expo Workshop: PaddlePaddle-based Deep Learning at Baidu
  Dejing Dou · Chenxia Li · Teng Xi · Dingfu Zhou · Tianyi Wu · Xuhong Li · Zhengjie Huang · Guocheng Niu · Ji Liu · Yaqing Wang · Xin Wang · Qianwei Cai
- 2021: Opening Remarks
  Dejing Dou
- 2020 Poster: RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr
  Xingjian Li · Haoyi Xiong · Haozhe An · Cheng-Zhong Xu · Dejing Dou
- 2020 Expo Talk Panel: Baidu AutoDL: Automated and Interpretable Deep Learning
  Bolei Zhou · Yi Yang · Quanshi Zhang · Dejing Dou · Haoyi Xiong · Jiahui Yu · Humphrey Shi · Linchao Zhu · Xingjian Li