DREAM: A Unified Framework for Drift-Corrected Federated Multi-Objective Learning
Yuan Zhou ⋅ Yidan Ou ⋅ Xinli Shi
Abstract
Federated Multi-Objective Learning (FMOL) enables collaborative training of conflicting objectives but faces a compounded challenge: the recursive coupling between intra-task client drift and inter-task aggregation bias. We propose DREAM, a unified framework that jointly corrects these two coupled error sources through drift-aware control variates and momentum-smoothed local updates. On the server side, DREAM formulates multi-objective aggregation as a regularized quadratic program parameterized by a task correction matrix, yielding a generalized formulation that flexibly accommodates scalarization, prioritization, and gradient-manipulation strategies. Theoretically, we establish a linear-speedup convergence rate of $\mathcal{O}(1/\sqrt{NT})$ for non-convex objectives, and we provide guarantees for the conflict-avoidant descent direction. In the strongly convex setting, DREAM converges in weighted sub-optimality and admits a unified Lyapunov analysis showing linear convergence to a regularization-dependent neighborhood. Numerical experiments validate the superior performance of DREAM in practice.
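The abstract does not spell out the server-side quadratic program, but its general shape can be illustrated. The sketch below is a minimal, assumed instantiation: task weights $w$ are chosen to minimize $\|(G+C)^\top w\|^2 + \lambda\|w - w_{\mathrm{ref}}\|^2$ over the probability simplex, where rows of $G$ are per-task aggregated gradients, $C$ is a task correction matrix (here simply an additive placeholder), and $\lambda$ controls the pull toward a reference weighting. The solver (projected gradient descent with a standard simplex projection) and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def aggregate_weights(G, C=None, lam=0.1, steps=500, lr=0.05):
    """Hypothetical server-side aggregation (sketch, not the paper's algorithm):
        min_w ||(G + C)^T w||^2 + lam * ||w - 1/M||^2   s.t.  w in simplex,
    where G is (M tasks x d params) of aggregated task gradients and C is an
    assumed additive task correction matrix. Solved by projected gradient descent.
    Returns the task weights and the resulting common descent direction."""
    M = G.shape[0]
    Gc = G + (C if C is not None else 0.0)
    A = Gc @ Gc.T                 # M x M Gram matrix of corrected task gradients
    w = np.full(M, 1.0 / M)       # start at uniform weights
    ref = np.full(M, 1.0 / M)     # reference weighting for the regularizer
    for _ in range(steps):
        grad = 2.0 * A @ w + 2.0 * lam * (w - ref)
        w = project_simplex(w - lr * grad)
    return w, Gc.T @ w
```

With $\lambda = 0$ this reduces to an MGDA-style min-norm weighting; with large $\lambda$ it recovers near-uniform scalarization, which is one way the "generalized formulation" can interpolate between aggregation strategies.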