Beyond the Trade-off: Unifying Fairness and Performance in Federated Learning
Abstract
Federated Learning (FL) suffers from data heterogeneity, which leads to inconsistent performance of the globally trained model across clients and hence unfair outcomes among users. Existing fair FL algorithms face a trade-off: they either sacrifice global model performance to promote fairness or fall short of achieving optimal fairness. In this paper, we propose a novel framework that bridges this trade-off by integrating information-theoretic principles with model alignment. Specifically, we leverage the Maximum Entropy Principle to derive an analytic, closed-form solution for fair aggregation weights, ensuring significant fairness gains with minimal computational overhead. To preserve global model performance, we further employ a step-wise model alignment strategy that synchronizes gradient directions across heterogeneous clients, effectively mitigating the drift induced by local updates. Theoretical analysis proves that our method converges even in non-convex settings. Importantly, we push the theoretical frontier of federated fairness by extending performance variance analysis to generalized regression, providing broader guarantees. Extensive experiments on five datasets demonstrate that our approach consistently outperforms state-of-the-art methods, achieving superior fairness without sacrificing global accuracy.
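As a rough illustration of how a maximum-entropy formulation can yield closed-form aggregation weights (the loss constraint, the per-client losses $F_k$, and the temperature $\lambda$ below are assumptions for exposition, not the paper's actual derivation), consider maximizing the entropy of the weight vector subject to normalization and a target on the weighted average client loss:
\[
\max_{p}\; H(p) = -\sum_{k=1}^{K} p_k \log p_k
\quad \text{s.t.} \quad \sum_{k=1}^{K} p_k = 1, \qquad \sum_{k=1}^{K} p_k F_k = c .
\]
Stationarity of the Lagrangian, $-\log p_k - 1 + \alpha + \beta F_k = 0$, gives an exponential (softmax) form:
\[
p_k = \frac{\exp(F_k/\lambda)}{\sum_{j=1}^{K} \exp(F_j/\lambda)},
\]
where $F_k$ is client $k$'s local loss and $\lambda > 0$ (the inverse of the constraint's multiplier $\beta$) controls how strongly high-loss clients are upweighted; such a closed form can be evaluated in a single pass over the reported losses, consistent with the minimal-overhead claim above.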