In federated learning (FL), model performance typically suffers from client drift induced by data heterogeneity, and mainstream works focus on correcting this drift. We propose a different approach, named virtual homogeneity learning (VHL), to directly "rectify" the data heterogeneity. In particular, VHL conducts FL with a virtual homogeneous dataset crafted to satisfy two conditions: it contains no private information, and it is separable. The virtual dataset can be generated from pure noise shared across clients, and serves to calibrate the features learned from the heterogeneous clients. Theoretically, we prove that VHL achieves provable generalization performance on the natural distribution. Empirically, we demonstrate that VHL endows FL with drastically improved convergence speed and generalization performance. VHL is the first attempt to use a virtual dataset to address data heterogeneity, offering a new and effective means of tackling this problem in FL.
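The shared, noise-generated virtual dataset described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function `make_virtual_dataset` and all of its parameters are hypothetical, and class-specific Gaussian anchors are just one simple way to obtain a label-separable dataset from pure noise. The key property shown is that clients using the same seed obtain an identical dataset, which can then act as a common anchor for feature calibration.

```python
import numpy as np

def make_virtual_dataset(num_classes=3, per_class=10, dim=8, seed=0):
    """Generate a label-separable virtual dataset from pure noise.

    Each class is drawn from a Gaussian centred at its own random
    anchor, so the classes are separable by construction and the data
    contain no private information. Every client that uses the same
    seed reproduces exactly the same dataset.
    """
    rng = np.random.default_rng(seed)
    # Widely spread anchors keep the classes well separated.
    anchors = rng.normal(0.0, 5.0, size=(num_classes, dim))
    xs, ys = [], []
    for c in range(num_classes):
        xs.append(anchors[c] + rng.normal(0.0, 1.0, size=(per_class, dim)))
        ys.append(np.full(per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

# Two clients with the same seed obtain identical virtual data,
# so no virtual samples ever need to be transmitted.
x_a, y_a = make_virtual_dataset(seed=42)
x_b, y_b = make_virtual_dataset(seed=42)
assert np.allclose(x_a, x_b) and np.array_equal(y_a, y_b)
```

In this sketch each client would mix the virtual samples into its local training batches and add a calibration term pulling the features of virtual samples (per class) toward a shared target, alongside the usual classification loss on its private data.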
Author Information
Zhenheng Tang (Hong Kong Baptist University)
Yonggang Zhang (Hong Kong Baptist University)
Shaohuai Shi (The Hong Kong University of Science and Technology)
Xin He (Hong Kong Baptist University)
Bo Han (HKBU / RIKEN)
Xiaowen Chu (Hong Kong University of Science and Technology (Guangzhou))
Dr. Chu received his B.Eng. degree in Computer Science from Tsinghua University, Beijing, P. R. China, in 1999, and his Ph.D. degree in Computer Science from The Hong Kong University of Science and Technology in 2003. He is currently a Professor at the Data Science and Analytics Thrust, Information Hub, HKUST(GZ), and an Affiliate Professor in the Department of Computer Science and Engineering, HKUST. He worked in the Department of Computer Science at Hong Kong Baptist University from 2003 to 2021. He is a senior member of the IEEE, a member of the ACM, and a vice-chairman of the Blockchain Technical Committee of the China Institute of Communications. His current research interests include GPU computing, distributed machine learning, cloud computing, and wireless networks. He is especially interested in the modelling, parallel algorithm design, application optimization, and energy efficiency of GPU computing.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning »
  Thu. Jul 21st 08:20 -- 08:25 PM, Room 327 - 329
More from the Same Authors
- 2022 : Invariance Principle Meets Out-of-Distribution Generalization on Graphs »
  Yongqiang Chen · Yonggang Zhang · Yatao Bian · Han Yang · Kaili MA · Binghui Xie · Tongliang Liu · Bo Han · James Cheng
- 2022 : Pareto Invariant Risk Minimization »
  Yongqiang Chen · Kaiwen Zhou · Yatao Bian · Binghui Xie · Kaili MA · Yonggang Zhang · Han Yang · Bo Han · James Cheng
- 2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Contrastive Learning with Boosted Memorization »
  Zhihan Zhou · Jiangchao Yao · Yan-Feng Wang · Bo Han · Ya Zhang
- 2022 Spotlight: Contrastive Learning with Boosted Memorization »
  Zhihan Zhou · Jiangchao Yao · Yan-Feng Wang · Bo Han · Ya Zhang
- 2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Understanding Robust Overfitting of Adversarial Training and Beyond »
  Chaojian Yu · Bo Han · Li Shen · Jun Yu · Chen Gong · Mingming Gong · Tongliang Liu
- 2022 Poster: Modeling Adversarial Noise for Adversarial Training »
  Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu
- 2022 Poster: Improving Adversarial Robustness via Mutual Information Estimation »
  Dawei Zhou · Nannan Wang · Xinbo Gao · Bo Han · Xiaoyu Wang · Yibing Zhan · Tongliang Liu
- 2022 Spotlight: Understanding Robust Overfitting of Adversarial Training and Beyond »
  Chaojian Yu · Bo Han · Li Shen · Jun Yu · Chen Gong · Mingming Gong · Tongliang Liu
- 2022 Spotlight: Improving Adversarial Robustness via Mutual Information Estimation »
  Dawei Zhou · Nannan Wang · Xinbo Gao · Bo Han · Xiaoyu Wang · Yibing Zhan · Tongliang Liu
- 2022 Spotlight: Modeling Adversarial Noise for Adversarial Training »
  Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu
- 2022 Poster: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack »
  Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng
- 2022 Spotlight: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack »
  Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng
- 2021 Poster: Towards Defending against Adversarial Examples via Attack-Invariant Features »
  Dawei Zhou · Tongliang Liu · Bo Han · Nannan Wang · Chunlei Peng · Xinbo Gao
- 2021 Spotlight: Towards Defending against Adversarial Examples via Attack-Invariant Features »
  Dawei Zhou · Tongliang Liu · Bo Han · Nannan Wang · Chunlei Peng · Xinbo Gao
- 2020 Poster: Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks »
  Yonggang Zhang · Ya Li · Tongliang Liu · Xinmei Tian