Heterogeneous Customizable Personalized Federated Fine-Tuning Approach for Large Language Models
Abstract
Personalized federated LoRA fine-tuning has become a key approach to addressing data heterogeneity in the distributed fine-tuning of large language models (LLMs). Existing methods typically assume homogeneous personalization needs across clients, relying on dual-LoRA or personalized-calibration schemes; they fail to account for the heterogeneity of local personalization requirements and the conflicting optimization objectives within dual-LoRA designs, which limits scalability and performance. To address this, we propose Het-CPFLoRA, a customizable heterogeneous federated LoRA fine-tuning algorithm inspired by the decoupling properties of LoRA parameters. Het-CPFLoRA employs a single-adapter fine-tuning scheme to mitigate conflicts between personalized and generalized optimization, decouples LoRA into generalized and personalized subspaces for local customization, and uses SVD compression to integrate cross-client generalized knowledge. During inference, an OOD-oriented dynamic mechanism adjusts the weighting between personalized and generalized decoupled knowledge, improving performance on user data. Extensive experiments on two public benchmark datasets show that Het-CPFLoRA outperforms state-of-the-art methods in both personalization and generalization across heterogeneous scenarios. The code will be released as an open-source project.
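To make the decouple-compress-reweight pipeline sketched above concrete, the following is a minimal numpy illustration. It is not the paper's implementation: the function names, the rank split, the simple averaging rule for aggregation, and the fixed mixing weight `alpha` (which the actual method would derive from an OOD score) are all assumptions for exposition.

```python
import numpy as np

def svd_compress(delta_w, rank):
    """Truncated SVD: keep the top-`rank` singular directions of an
    accumulated LoRA update, returning factors (B, A) with delta_w ~ B @ A."""
    U, S, Vt = np.linalg.svd(delta_w, full_matrices=False)
    B = U[:, :rank] * S[:rank]   # shape (d_out, rank)
    A = Vt[:rank, :]             # shape (rank, d_in)
    return B, A

def decouple(B, A, r_gen):
    """Split a rank-r LoRA adapter (B @ A) into a generalized subspace
    (first r_gen rank-one components, shared with the server) and a
    personalized subspace (remaining components, kept local)."""
    gen = (B[:, :r_gen], A[:r_gen, :])
    per = (B[:, r_gen:], A[r_gen:, :])
    return gen, per

def aggregate_generalized(client_updates, rank):
    """Server side: average the clients' generalized updates, then
    SVD-compress the mean back to low rank before broadcasting."""
    mean_dw = np.mean([Bg @ Ag for Bg, Ag in client_updates], axis=0)
    return svd_compress(mean_dw, rank)

def inference_update(gen, per, alpha):
    """Weighted combination of generalized and personalized knowledge;
    here alpha is a fixed scalar standing in for an OOD-driven weight."""
    (Bg, Ag), (Bp, Ap) = gen, per
    return alpha * (Bg @ Ag) + (1.0 - alpha) * (Bp @ Ap)
```

Because the two subspaces partition the adapter's rank-one components, `inference_update(gen, per, 0.5)` recovers half of the full update `B @ A`, and sweeping `alpha` interpolates between purely generalized and purely personalized behavior.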