Poster
in
Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning

Augmenting the Knowledge to Large Model from Federated Small Models

Miru Kim · Minhae Kwon


Abstract:

Personalized Federated Learning (pFL) is a variant of Federated Learning (FL) that splits a model into personalized and shared parts to address heterogeneity in distributed data environments. The personalized part allows pFL to adapt to each client's local data distribution. However, since the models of clients participating in pFL are usually shallow and narrow, they can limit the potential for performance improvement, much like System I in dual-system theory. In this paper, we aim to address the performance constraints caused by the limited capacity of clients by transferring knowledge in the opposite direction of conventional knowledge distillation methods. The proposed approach, Knowledge Augmentation, transfers the knowledge of the clients' small models to a large model in the central server, which operates like System II in dual-system theory. To preserve client privacy, the large model takes the output of each client's personalized part as input rather than the raw local data.
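The abstract describes a reverse-distillation protocol: each client forwards its private data through its personalized part and uploads the resulting embeddings together with its small model's soft predictions; the server then fits a larger model on those embeddings. Below is a minimal, purely illustrative sketch of that flow. All names and shapes are assumptions made for this example, and simple linear maps stand in for the clients' shallow networks and the server's large model; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl_divergence(t, p, eps=1e-9):
    # Mean KL between target distributions t and predictions p.
    return float(np.mean(np.sum(t * (np.log(t + eps) - np.log(p + eps)), axis=1)))

# --- hypothetical setup: a few clients with private data and small models ---
n_clients, n_samples, d_in, d_emb, n_classes = 3, 32, 8, 16, 4
clients = []
for _ in range(n_clients):
    x = rng.normal(size=(n_samples, d_in))          # private local data (never shared)
    W = rng.normal(size=(d_in, d_emb)) * 0.1        # personalized part (stand-in)
    V = rng.normal(size=(d_emb, n_classes)) * 0.1   # small model's head (stand-in)
    clients.append((x, W, V))

# --- knowledge augmentation: clients upload embeddings + soft labels, not raw data ---
H_parts, T_parts = [], []
for x, W, V in clients:
    h = np.tanh(x @ W)                  # output of the personalized part: this is shared
    H_parts.append(h)
    T_parts.append(softmax(h @ V))      # small model's knowledge as soft predictions
H, T = np.vstack(H_parts), np.vstack(T_parts)

# --- server: train a (here, toy linear) large model to absorb client knowledge ---
U = rng.normal(size=(d_emb, n_classes)) * 0.01
kl_before = kl_divergence(T, softmax(H @ U))
for _ in range(200):
    P = softmax(H @ U)
    U -= 0.5 * H.T @ (P - T) / len(H)   # gradient step on soft-label cross-entropy
kl_after = kl_divergence(T, softmax(H @ U))
```

After training, `kl_after` is lower than `kl_before`, i.e. the server model has moved toward the aggregated predictions of the clients' small models without ever seeing their raw data.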