FedGain: Toward Negative-Gain-Free Client Collaboration in Federated Learning
Abstract
Data heterogeneity is a fundamental challenge in Federated Learning (FL): the model drift it induces often yields "negative gains" for data-abundant clients, whose performance under the global model falls below that of purely local training. To address this issue, we propose FedGain, a novel framework that optimizes collaborative client clustering to mitigate negative gains. We are the first to develop a modified Scaling Law (SL) that quantifies the reduction in data utility caused by heterogeneity, and we define Effective Federated Capacity to match clients with the highest potential collaboration gains. Extensive experiments demonstrate that our modified SL closely follows power-law learning behavior in non-IID scenarios, and that FedGain suppresses negative gains to a negligible level across various FL algorithms while outperforming other Clustered FL methods.