Subgraphs of a larger global graph may be distributed across multiple devices and, due to privacy restrictions, only locally accessible, even though there may be links between them. Recently proposed subgraph Federated Learning (FL) methods deal with these missing links across local subgraphs while distributively training Graph Neural Networks (GNNs) on them. However, they overlook the inevitable heterogeneity between subgraphs comprising different communities of a global graph, and consequently collapse incompatible knowledge from the local GNN models. To this end, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs rather than on learning a single global model, and propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it. Since the server cannot access the subgraph of each client, FED-PUB computes functional embeddings of the local GNNs by feeding them the same random graphs as inputs, measures the similarities between these embeddings, and uses the similarities to perform weighted averaging for server-side aggregation. Further, each client learns a personalized sparse mask to select and update only the subgraph-relevant subset of the aggregated parameters. We validate FED-PUB on six datasets, considering both non-overlapping and overlapping subgraphs, and show that it significantly outperforms relevant baselines. Our code is available at https://github.com/JinheonBaek/FED-PUB.
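A minimal sketch of the aggregation scheme the abstract describes, not the authors' implementation: each client GNN is stood in for by a single linear layer, its flattened output on a shared random input serves as the functional embedding, pairwise cosine similarities drive a per-client weighted average, and a binary mask plays the role of the learned personalized sparse mask. All shapes, the temperature `tau`, and the random mask are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def functional_embedding(params, rand_features):
    # Stand-in for a client GNN: a single linear layer applied to shared
    # random-graph node features; the flattened output is the model's
    # functional embedding, so the server never sees the client's subgraph.
    return (rand_features @ params).ravel()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Three clients with their own parameters (hypothetical 8x4 weight matrices),
# plus one random input that every client's model is evaluated on.
client_params = [rng.normal(size=(8, 4)) for _ in range(3)]
rand_features = rng.normal(size=(16, 8))

# 1) Server computes pairwise functional similarities between clients.
embs = [functional_embedding(p, rand_features) for p in client_params]
sim = np.array([[cosine(ei, ej) for ej in embs] for ei in embs])

# 2) Scaled, exponentiated similarities become per-client mixing weights.
tau = 5.0  # hypothetical temperature
w = np.exp(tau * sim)
w /= w.sum(axis=1, keepdims=True)  # each row sums to 1

# 3) Personalized aggregation: each client receives its own weighted
# average of all clients' parameters, rather than one global model.
personalized = [sum(w[i, j] * client_params[j] for j in range(3))
                for i in range(3)]

# 4) A client-side sparse mask keeps only the subgraph-relevant subset of
# the aggregated parameters (a fixed random mask stands in for the
# learned one here).
mask = (rng.random(client_params[0].shape) > 0.5).astype(float)
masked_update = personalized[0] * mask
```

Because the similarity weights differ per row, clients belonging to similar communities end up mixing mostly with each other, which is the point of the personalized (rather than uniform) server-side averaging.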
Author Information
Jinheon Baek (KAIST)
Wonyong Jeong (KAIST)
Jiongdao Jin (KAIST)
Jaehong Yoon (KAIST)
Sung Ju Hwang (UNIST)
More from the Same Authors
- 2021 : Entropy Weighted Adversarial Training
  Minseon Kim · Jihoon Tack · Jinwoo Shin · Sung Ju Hwang
- 2021 : Consistency Regularization for Adversarial Robustness
  Jihoon Tack · Sihyun Yu · Jongheon Jeong · Minseon Kim · Sung Ju Hwang · Jinwoo Shin
- 2023 : Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations
  Hyeonjeong Ha · Minseon Kim · Sung Ju Hwang
- 2023 Poster: Exploring Chemical Space with Score-based Out-of-distribution Generation
  Seul Lee · Jaehyeong Jo · Sung Ju Hwang
- 2023 Poster: Continual Learners are Incremental Model Generalizers
  Jaehong Yoon · Sung Ju Hwang · Yue Cao
- 2023 Poster: Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation
  Jeffrey Willette · Seanie Lee · Bruno Andreis · Kenji Kawaguchi · Juho Lee · Sung Ju Hwang
- 2023 Poster: Margin-based Neural Network Watermarking
  Byungjoo Kim · Suyoung Lee · Seanie Lee · Son · Sung Ju Hwang
- 2022 Poster: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Poster: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Spotlight: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2021 Poster: Federated Continual Learning with Weighted Inter-client Transfer
  Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang
- 2021 Spotlight: Federated Continual Learning with Weighted Inter-client Transfer
  Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang
- 2020 : Technical Talks Session 1
  Ishika Singh · Laura Rieger · Rasmus Høegh · Hanlin Lu · Wonyong Jeong
- 2017 Poster: Combined Group and Exclusive Sparsity for Deep Neural Networks
  Jaehong Yoon · Sung Ju Hwang
- 2017 Talk: Combined Group and Exclusive Sparsity for Deep Neural Networks
  Jaehong Yoon · Sung Ju Hwang