FedARC: Anchor-Guided Residual Compensation for Data and Model Heterogeneous Federated Learning
Chentao Lu ⋅ Xuhao Ren ⋅ Dawei Xu ⋅ Chuan Zhang ⋅ Liehuang Zhu
Abstract
Federated learning (FL) allows clients to collaboratively train models without exposing private data, but practical FL is simultaneously challenged by data heterogeneity and model heterogeneity. Prior heterogeneous FL (HtFL) approaches often fail to handle fine-grained feature shifts, which weakens representation alignment and limits cross-client knowledge transfer, degrading both personalization and generalization. We propose FedARC, an HtFL framework that couples a shared lightweight extractor with client-specific fusion: a trainable projector integrates local and global embeddings, while adaptive residual compensation dynamically corrects feature-level mismatches. To further stabilize aggregation, FedARC performs semantic anchor alignment across clients, and we prove that FedARC achieves an $\mathcal{O}(1/T)$ convergence rate under non-convex objectives. Experiments on five public benchmarks show that FedARC outperforms nine state-of-the-art HtFL baselines by up to 2.63\% in average accuracy, while maintaining efficient communication and computation.
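The abstract describes a client-side fusion step in which a trainable projector combines local and global embeddings and a residual term corrects the remaining feature-level mismatch toward a semantic anchor. A minimal NumPy sketch of this idea follows; the function name, the concatenation-based projector, and the scalar gate `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_with_residual_compensation(h_local, h_global, W_proj, alpha):
    """Hypothetical client-side fusion step (names and form are assumptions).

    h_local  : embedding from the shared lightweight extractor on local data
    h_global : global anchor embedding aggregated across clients
    W_proj   : trainable projector integrating the two views
    alpha    : adaptive weight gating the residual correction
    """
    # Projector fuses the concatenated local and global views.
    fused = W_proj @ np.concatenate([h_local, h_global])
    # Residual measures the feature-level mismatch to the anchor.
    residual = h_global - fused
    # Compensation shifts the fused feature toward the anchor.
    return fused + alpha * residual

d = 8
h_local = rng.standard_normal(d)
h_global = rng.standard_normal(d)
W_proj = rng.standard_normal((d, 2 * d)) / np.sqrt(2 * d)

z = fuse_with_residual_compensation(h_local, h_global, W_proj, alpha=1.0)
# With alpha = 1 the correction snaps the fused feature fully onto the anchor.
assert np.allclose(z, h_global)
```

In an actual system `alpha` would be learned or computed per feature rather than fixed, which is what makes the compensation adaptive.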