There has been a surge of interest in continual learning and federated learning, both of which are important for deploying deep neural networks in real-world scenarios. Yet little research has been done on the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients while preventing interference from irrelevant knowledge. To resolve these issues, we propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the network weights into global federated parameters and sparse task-specific parameters, so that each client receives selective knowledge from other clients by taking a weighted combination of their task-specific parameters. FedWeIT minimizes interference between incompatible tasks and allows positive knowledge transfer across clients during learning. We validate FedWeIT against existing federated learning and continual learning methods under varying degrees of task similarity across clients; our model significantly outperforms them with a large reduction in communication cost.
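To make the decomposition concrete, the sketch below shows one way the described weight composition could look for a single layer: a globally shared (federated) base, a sparse task-specific term owned by the client, and an attention-weighted sum of task-specific parameters received from other clients. This is a minimal illustration written against the abstract, not the authors' released implementation; the class and tensor names (`FedWeITLayerSketch`, `shared_base`, `task_adaptive`, `attention`, `foreign_params`) and the PyTorch framing are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FedWeITLayerSketch(nn.Module):
    """Illustrative single layer following the decomposition described above:
    global federated parameters + sparse task-specific parameters
    + a weighted combination of other clients' task-specific parameters.
    Names and shapes are illustrative assumptions."""

    def __init__(self, in_dim: int, out_dim: int, num_foreign: int):
        super().__init__()
        # Globally shared base parameters, aggregated across clients by the server.
        self.shared_base = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        # Task-specific parameters for this client's current task
        # (sparsity would be induced by a sparsity penalty during training, not shown).
        self.task_adaptive = nn.Parameter(torch.zeros(out_dim, in_dim))
        # Attention weights over task-specific parameters received from other clients.
        self.attention = nn.Parameter(torch.zeros(num_foreign))
        # Other clients' task-specific parameters; kept fixed on this client.
        self.register_buffer(
            "foreign_params", torch.zeros(num_foreign, out_dim, in_dim)
        )

    def composed_weight(self) -> torch.Tensor:
        # Selective inter-client transfer: a small attention weight suppresses
        # knowledge from irrelevant tasks, a large one imports useful knowledge.
        transferred = torch.einsum("k,koi->oi", self.attention, self.foreign_params)
        return self.shared_base + self.task_adaptive + transferred

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.composed_weight())


# Example: a client composes its layer weight from the shared base, its own
# task-specific parameters, and three foreign task-specific parameter sets.
layer = FedWeITLayerSketch(in_dim=32, out_dim=16, num_foreign=3)
out = layer(torch.randn(8, 32))  # shape: (8, 16)
```

Under such a decomposition, only the shared base and the sparse task-specific parameters would need to be exchanged between clients and the server, which is consistent with the communication savings claimed above.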
Author Information
Jaehong Yoon (KAIST)
Wonyong Jeong (Korea Advanced Institute of Science and Technology)
GiWoong Lee (Agency for Defense Development)
Eunho Yang (KAIST, AITRICS)
Sung Ju Hwang (KAIST, AITRICS)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Federated Continual Learning with Weighted Inter-client Transfer
  Tue. Jul 20th, 04:00 -- 06:00 PM
More from the Same Authors
- 2023 Poster: Personalized Subgraph Federated Learning
  Jinheon Baek · Wonyong Jeong · Jiongdao Jin · Jaehong Yoon · Sung Ju Hwang
- 2023 Poster: Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation
  Yeonsung Jung · Hajin Shim · June Yong Yang · Eunho Yang
- 2023 Poster: RGE: A Repulsive Graph Rectification for Node Classification via Influence
  Jaeyun Song · Sungyub Kim · Eunho Yang
- 2023 Poster: Continual Learners are Incremental Model Generalizers
  Jaehong Yoon · Sung Ju Hwang · Yue Cao
- 2022 Poster: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
  Jaehyeong Jo · Seul Lee · Sung Ju Hwang
- 2022 Spotlight: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
  Jaehyeong Jo · Seul Lee · Sung Ju Hwang
- 2022 Poster: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Poster: Set Based Stochastic Subsampling
  Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2022 Poster: TAM: Topology-Aware Margin Loss for Class-Imbalanced Node Classification
  Jaeyun Song · Joonhyung Park · Eunho Yang
- 2022 Poster: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Spotlight: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: TAM: Topology-Aware Margin Loss for Class-Imbalanced Node Classification
  Jaeyun Song · Joonhyung Park · Eunho Yang
- 2022 Spotlight: Set Based Stochastic Subsampling
  Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2021 Poster: Large-Scale Meta-Learning with Continual Trajectory Shifting
  JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang
- 2021 Spotlight: Large-Scale Meta-Learning with Continual Trajectory Shifting
  JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang
- 2021 Poster: Learning to Generate Noise for Multi-Attack Robustness
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2021 Poster: Adversarial Purification with Score-based Generative Models
  Jongmin Yoon · Sung Ju Hwang · Juho Lee
- 2021 Spotlight: Adversarial Purification with Score-based Generative Models
  Jongmin Yoon · Sung Ju Hwang · Juho Lee
- 2021 Spotlight: Learning to Generate Noise for Multi-Attack Robustness
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2021 Poster: Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation
  Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang
- 2021 Spotlight: Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation
  Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang
- 2020: Technical Talks Session 1
  Ishika Singh · Laura Rieger · Rasmus Høegh · Hanlin Lu · Wonyong Jeong
- 2020 Poster: Cost-Effective Interactive Attention Learning with Neural Attention Processes
  Jay Heo · Junhyeon Park · Hyewon Jeong · Kwang Joon Kim · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Meta Variance Transfer: Learning to Augment from the Others
  Seong-Jin Park · Seungju Han · Ji-won Baek · Insoo Kim · Juhwan Song · Hae Beom Lee · Jae-Joon Han · Sung Ju Hwang
- 2020 Poster: Self-supervised Label Augmentation via Input Transformations
  Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Adversarial Neural Pruning with Latent Vulnerability Suppression
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2019 Poster: Spectral Approximate Inference
  Sejun Park · Eunho Yang · Se-Young Yun · Jinwoo Shin
- 2019 Poster: Learning What and Where to Transfer
  Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2019 Oral: Spectral Approximate Inference
  Sejun Park · Eunho Yang · Se-Young Yun · Jinwoo Shin
- 2019 Oral: Learning What and Where to Transfer
  Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2019 Poster: Trimming the $\ell_1$ Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning
  Jihun Yun · Peng Zheng · Eunho Yang · Aurelie Lozano · Aleksandr Aravkin
- 2019 Oral: Trimming the $\ell_1$ Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning
  Jihun Yun · Peng Zheng · Eunho Yang · Aurelie Lozano · Aleksandr Aravkin
- 2018 Poster: Deep Asymmetric Multi-task Feature Learning
  Hae Beom Lee · Eunho Yang · Sung Ju Hwang
- 2018 Oral: Deep Asymmetric Multi-task Feature Learning
  Hae Beom Lee · Eunho Yang · Sung Ju Hwang
- 2017 Poster: Combined Group and Exclusive Sparsity for Deep Neural Networks
  Jaehong Yoon · Sung Ju Hwang
- 2017 Poster: Sparse + Group-Sparse Dirty Models: Statistical Guarantees without Unreasonable Conditions and a Case for Non-Convexity
  Eunho Yang · Aurelie Lozano
- 2017 Talk: Sparse + Group-Sparse Dirty Models: Statistical Guarantees without Unreasonable Conditions and a Case for Non-Convexity
  Eunho Yang · Aurelie Lozano
- 2017 Talk: Combined Group and Exclusive Sparsity for Deep Neural Networks
  Jaehong Yoon · Sung Ju Hwang
- 2017 Poster: Ordinal Graphical Models: A Tale of Two Approaches
  Arun Sai Suggala · Eunho Yang · Pradeep Ravikumar
- 2017 Talk: Ordinal Graphical Models: A Tale of Two Approaches
  Arun Sai Suggala · Eunho Yang · Pradeep Ravikumar