

Spotlight

Debiasing Model Updates for Improving Personalized Federated Training

Durmus Alp Emre Acar · Yue Zhao · Ruizhao Zhu · Ramon Matas · Matthew Mattina · Paul Whatmough · Venkatesh Saligrama


Abstract:

We propose a novel federated learning method that is customized to the objective of each edge device. In our method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then personalized locally by each device to meet its specific objective. Unlike the conventional federated learning setting, training a customized model for each device is hindered both by the inherent data biases of the various devices and by the requirements imposed by the federated architecture. We propose gradient-correction methods that leverage prior work and explicitly de-bias the meta-model in the distributed, heterogeneous-data setting in order to learn personalized device models. We present convergence guarantees of our method for strongly convex, convex, and nonconvex meta-objectives. We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings.
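The abstract describes the training pattern only at a high level. As a concrete illustration, the sketch below shows one standard way to de-bias local updates in a federated loop: a SCAFFOLD-style control-variate correction applied during client steps, followed by a one-step local fine-tune for personalization. This is a minimal sketch under assumed toy quadratic client losses, not the paper's algorithm; every name and hyperparameter in it is hypothetical.

```python
# Minimal sketch: federated training with a gradient-correction
# (control-variate) term to de-bias local updates, plus per-device
# personalization. Illustrative only; NOT the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, rounds, local_steps, lr = 5, 8, 50, 5, 0.1

# Heterogeneous clients: each minimizes 0.5 * ||w - w_star_i||^2 with its
# own optimum w_star_i, so naively averaging local updates is biased.
optima = [rng.normal(size=dim) + 3.0 for _ in range(n_clients)]

def grad(w, w_star):
    """Gradient of the local quadratic loss 0.5 * ||w - w_star||^2."""
    return w - w_star

w_global = np.zeros(dim)                       # server meta-model
c_global = np.zeros(dim)                       # server control variate
c_local = [np.zeros(dim) for _ in range(n_clients)]

for _ in range(rounds):
    deltas, c_deltas = [], []
    for i in range(n_clients):
        w = w_global.copy()
        for _ in range(local_steps):
            # Corrected gradient: subtract the client's stale drift
            # estimate and add the server's, de-biasing the step direction.
            g = grad(w, optima[i]) - c_local[i] + c_global
            w -= lr * g
        # Refresh the client's control variate from the realized update.
        c_new = c_local[i] - c_global + (w_global - w) / (local_steps * lr)
        c_deltas.append(c_new - c_local[i])
        c_local[i] = c_new
        deltas.append(w - w_global)
    w_global += np.mean(deltas, axis=0)        # aggregate model updates
    c_global += np.mean(c_deltas, axis=0)      # aggregate drift estimates

# Personalization: each device fine-tunes the meta-model on its objective.
personalized = [w_global - lr * grad(w_global, optima[i])
                for i in range(n_clients)]
print("meta-model distance to mean optimum:",
      np.linalg.norm(w_global - np.mean(optima, axis=0)))
print("client 0 personalized error:",
      np.linalg.norm(personalized[0] - optima[0]))
```

The control variates `c_local` and `c_global` estimate each client's drift relative to the population gradient; subtracting the stale local estimate and adding the server's average removes the systematic bias that heterogeneous data would otherwise inject into the averaged update.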
