FedScar: Correcting Geometric Bias for Flatness-Consistent Federated Learning
Abstract
Federated Learning (FL) often suffers from degraded generalization under statistical heterogeneity, where client updates systematically deviate from the global objective. While recent Sharpness-Aware Minimization (SAM) methods promote locally flat solutions, they implicitly assume that local flatness transfers to the global model, an assumption that generally fails under heterogeneous data distributions. The result is a flatness discrepancy between the local and global loss landscapes. To address this issue, we propose FedScar, a federated optimization framework that explicitly corrects heterogeneity-induced geometric inconsistency. FedScar maintains a history-accumulated geometric bias to capture persistent curvature skew across clients, and employs a variance-aware injection mechanism to steer local updates toward regions that are flat with respect to the global objective. We provide a theoretical interpretation of FedScar as a Split-Dual ADMM formulation, which jointly enforces parameter consensus and geometric alignment. Extensive experiments under severe heterogeneity demonstrate that FedScar consistently reduces flatness discrepancy and improves generalization over state-of-the-art methods, without incurring additional communication overhead.
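To make the described mechanism concrete, the following is a minimal toy sketch of a SAM-style local step whose ascent direction is corrected by a history-accumulated bias estimate, combined with FedAvg-style aggregation over heterogeneous clients. All names (`rho`, `beta`, `bias`), the fixed mixing coefficient standing in for variance-aware injection, and the quadratic client losses are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def local_sam_step_with_bias(w, grad_fn, bias, rho=0.05, beta=0.9, lr=0.1):
    """One SAM-style local update. The ascent (perturbation) direction is
    corrected by an exponential moving average of past gradients, a crude
    stand-in for the history-accumulated geometric bias described above.
    The variance-aware injection is simplified to the fixed coefficient beta."""
    g = grad_fn(w)
    # Accumulate a running estimate of the client's persistent gradient skew.
    bias = beta * bias + (1.0 - beta) * g
    # Perturb toward the bias-corrected ascent direction, as in SAM.
    corrected = g - bias
    eps = rho * corrected / (np.linalg.norm(corrected) + 1e-12)
    # Descend using the gradient evaluated at the perturbed point.
    w_new = w - lr * grad_fn(w + eps)
    return w_new, bias

# Toy heterogeneous clients: quadratic losses with different optima.
targets = [np.array([1.0, 0.0]), np.array([-1.0, 0.5])]
grads = [lambda w, t=t: 2.0 * (w - t) for t in targets]

w_global = np.zeros(2)
biases = [np.zeros(2) for _ in targets]
for _ in range(50):
    local_models = []
    for i, grad_fn in enumerate(grads):
        w_i, biases[i] = local_sam_step_with_bias(w_global.copy(), grad_fn, biases[i])
        local_models.append(w_i)
    w_global = np.mean(local_models, axis=0)  # FedAvg-style aggregation

print(w_global)
```

On this toy problem the aggregate converges near the average of the client optima; the sketch only illustrates the shape of the update rule, not the convergence or flatness guarantees claimed in the paper.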