FedRGL: Robust Federated Graph Learning under Label Noise
Abstract
Federated Graph Learning (FGL) is a distributed machine learning paradigm based on graph neural networks that enables secure, collaborative modeling of local graph data across clients. However, label noise in graph data can degrade the generalization performance of the global model. Existing federated label-noise learning methods, developed primarily for computer vision tasks, often yield suboptimal results when applied directly to FGL. To address this issue, we propose a robust federated graph learning method under label noise, termed FedRGL. Specifically, FedRGL leverages the globally aggregated model and local subgraph structural information to implement a dual-perspective consistency filtering mechanism for noisy nodes under class-aware dynamic thresholds. The resulting class-aware dual-consistency filtering (CADF) also serves as a plug-and-play module that enhances noise robustness across various subgraph federated learning frameworks. To better exploit the supervisory information in the filtered noisy nodes, we employ natural augmentation techniques from graph contrastive learning to assign them high-confidence pseudo-labels. In addition, we measure model quality via the average predictive entropy of unlabeled nodes, enabling adaptive robust aggregation on the server side. Extensive experiments on real-world graph datasets show that FedRGL consistently outperforms existing methods across noise rates, noise types, and client scales, achieving on average 5--8\% higher accuracy and up to 30\% improvement over the weakest baselines under noisy conditions. The anonymous source code is available at https://anonymous.4open.science/r/FedRGL_ICML26-376F.
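To make the server-side aggregation idea concrete, the following is a minimal sketch of entropy-based adaptive weighting. The abstract does not specify the exact weighting formula, so this sketch assumes each client reports the average predictive entropy of its unlabeled nodes and the server turns negative entropies into normalized weights via a softmax; the function names (`predictive_entropy`, `aggregate`) are illustrative, not the paper's API.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Average Shannon entropy of per-node softmax outputs.

    probs: array of shape (num_unlabeled_nodes, num_classes).
    Lower values suggest a more confident (higher-quality) local model.
    """
    eps = 1e-12  # avoid log(0)
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

def aggregate(client_params: list[dict], client_entropies: list[float]) -> dict:
    """Weighted average of client parameters.

    Assumption: lower average entropy -> larger aggregation weight,
    realized here as a softmax over negative entropies.
    """
    e = np.asarray(client_entropies, dtype=float)
    w = np.exp(-e)
    w /= w.sum()  # weights sum to 1
    return {
        key: sum(wi * params[key] for wi, params in zip(w, client_params))
        for key in client_params[0]
    }
```

A low-entropy (confident) client thus pulls the global parameters toward its own, while a high-entropy client contributes less; any monotonically decreasing mapping from entropy to weight would serve the same purpose.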