Poster in Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities
A Joint Training-Calibration Framework for Test-Time Personalization with Label Distribution Shift in Federated Learning
Jian Xu · Shao-Lun Huang
Data heterogeneity has been a challenging issue in federated learning at both the training and inference stages, motivating a variety of approaches that learn either personalized models for participating clients or test-time adaptations for unseen clients. One such approach employs a shared feature representation together with a customized classifier head for each client. However, previous works either do not exploit the global head with its rich knowledge or assume that new clients have enough labeled data, which significantly limits their broader practicality. In this work, we propose a lightweight framework that tackles the label shift issue at model deployment via test-time prior estimation and model prediction calibration. We emphasize the importance of training a balanced global model in FL and the general effectiveness of prior estimation approaches. Numerical evaluations on benchmark datasets under various label distribution shift scenarios demonstrate the superiority of the proposed framework.
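To make the two test-time steps named in the abstract concrete, below is a minimal sketch, not the authors' exact method: it estimates an unseen client's label prior from unlabeled predictions with the classic EM procedure of Saerens et al. (2002) and then calibrates the global model's posteriors by the prior ratio. The function names, the NumPy-based setup, and the uniform training prior in the toy usage are illustrative assumptions.

```python
import numpy as np

def estimate_test_prior(probs, train_prior, n_iters=100, tol=1e-6):
    """EM-style estimate of the test label prior from unlabeled data.

    probs: (n_samples, n_classes) softmax outputs of the global model.
    train_prior: (n_classes,) label prior the model was trained under.
    """
    test_prior = train_prior.copy()
    for _ in range(n_iters):
        # Reweight class posteriors by the current prior ratio, renormalize,
        # and update the prior estimate as the average reweighted posterior.
        w = probs * (test_prior / train_prior)
        w /= w.sum(axis=1, keepdims=True)
        new_prior = w.mean(axis=0)
        if np.abs(new_prior - test_prior).max() < tol:
            test_prior = new_prior
            break
        test_prior = new_prior
    return test_prior

def calibrate_predictions(probs, train_prior, test_prior):
    """Label-shift correction: rescale posteriors by the prior ratio."""
    calibrated = probs * (test_prior / train_prior)
    return calibrated / calibrated.sum(axis=1, keepdims=True)

# Toy usage: a 3-class problem with an (assumed) balanced training prior,
# with Dirichlet samples standing in for the global model's softmax outputs.
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=200)
train_prior = np.full(3, 1 / 3)
test_prior = estimate_test_prior(probs, train_prior)
calibrated = calibrate_predictions(probs, train_prior, test_prior)
preds = calibrated.argmax(axis=1)
```

The sketch reflects why the abstract stresses a balanced global model: when the training prior is uniform, the calibration step reduces to reweighting posteriors by the estimated test prior alone, and no labeled data from the unseen client is required.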