Homophily-Heterogeneity Gradient Surgery for Federated Graph Learning
Abstract
Federated Graph Learning (FGL) facilitates privacy-preserving collaborative training of graph neural networks, yet homophily heterogeneity across subgraphs triggers optimization conflicts that degrade model generalization. Most existing solutions rely on multi-channel architectures to mitigate such conflicts, an approach that increases the burden on edge devices and lacks theoretical convergence guarantees. To overcome these limitations, we propose FedGCM, a novel FGL framework with Group-oriented Conflict Mitigation, which aligns inconsistent optimization objectives via a tailored gradient surgery scheme. Specifically, FedGCM first divides clients into distinct groups based on their homophily levels, a strategy that avoids exhaustive pairwise client-to-client conflict assessments. To resolve inter-group interference, we develop RPGrad, a gradient surgery mechanism based on residual projection, which integrates synergistic knowledge while filtering out inter-group conflicts. The refined updates are then transmitted in a group-wise fashion, effectively alleviating optimization conflicts induced by homophily heterogeneity without increasing the client-side burden. Furthermore, we provide a formal theoretical analysis establishing the convergence of FedGCM. Extensive experiments on both homophilous and heterophilous graphs demonstrate that FedGCM consistently achieves superior performance.
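To give intuition for the conflict-filtering step, the sketch below shows a generic projection-based gradient surgery between two group-level gradients: when their dot product is negative (a conflict), the component of one gradient along the other is removed before aggregation. This is a minimal illustrative sketch in the spirit of projection-based methods such as PCGrad, not the paper's exact RPGrad residual-projection formula; the function name and two-dimensional gradients are assumptions for illustration.

```python
import numpy as np

def project_out_conflict(g_a, g_b):
    """If g_a conflicts with g_b (negative dot product), subtract from g_a
    its component along g_b, keeping only the non-conflicting residual.
    Illustrative sketch only; not the paper's exact RPGrad rule."""
    dot = np.dot(g_a, g_b)
    if dot < 0:
        g_a = g_a - (dot / np.dot(g_b, g_b)) * g_b
    return g_a

# Two group-level gradients whose directions conflict.
g1 = np.array([1.0, 2.0])
g2 = np.array([1.0, -1.0])   # g1 . g2 = -1 < 0, so surgery is applied

g1_surgered = project_out_conflict(g1, g2)
# After surgery the refined update no longer opposes g2.
assert np.dot(g1_surgered, g2) >= 0.0
```

After surgery, the refined gradients can be averaged group-wise on the server, so only aligned (synergistic) directions are transmitted back to clients, adding no extra computation on edge devices.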