Decoupled Low-Rank Adaptation for Robust Federated Fine-Tuning
Xiuwen Fang ⋅ Xuliang Yang ⋅ Mang Ye
Abstract
Federated Learning (FL) enables collaborative training across distributed clients while preserving data privacy. However, fine-tuning large-scale pre-trained models in FL is hindered by resource constraints and communication costs. Although parameter-efficient fine-tuning strategies such as Low-Rank Adaptation (LoRA) effectively reduce the number of trainable parameters, the low-rank constraint exacerbates noise sensitivity, leading to overfitting and aggregation bias. Existing robust federated fine-tuning methods rely on additional proxy data and treat low-rank adapters as generic weight vectors. In this paper, we investigate the structural properties of LoRA and reveal a robustness asymmetry: the down-projection matrix $A$ extracts stable general features, whereas the up-projection matrix $B$ is highly susceptible to fitting noise patterns. Based on this finding, we propose Federated Decoupled LoRA (FDLoRA), which employs a dual-branch mechanism to decouple robust feature learning from noise modeling and mitigates noise interference through negative learning on the noisy branch. During federated aggregation, we establish global consensus by aggregating $B$ while preserving local feature alignment in $A$. Extensive experiments demonstrate that FDLoRA outperforms existing state-of-the-art methods across various noisy federated scenarios. Our code and models will be released.
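The asymmetric aggregation rule described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and shapes are hypothetical, and it only shows the standard LoRA decomposition ($\Delta W = AB$) together with an aggregation step that averages the up-projection $B$ across clients while leaving each client's down-projection $A$ untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lora(d_in, d_out, r):
    # Standard LoRA initialization: random down-projection A (d_in x r),
    # zero up-projection B (r x d_out), so Delta W starts at zero.
    return {"A": rng.normal(0.0, 0.02, (d_in, r)),
            "B": np.zeros((r, d_out))}

def lora_delta(adapter):
    # The low-rank weight update applied to the frozen base weight.
    return adapter["A"] @ adapter["B"]

def aggregate(clients):
    # Illustrative asymmetric aggregation (names hypothetical):
    # average only B across clients to form a global consensus;
    # each client keeps its own locally aligned A.
    global_B = np.mean([c["B"] for c in clients], axis=0)
    return [{"A": c["A"], "B": global_B.copy()} for c in clients]

clients = [make_lora(8, 4, 2) for _ in range(3)]
for c in clients:
    c["B"] += rng.normal(0.0, 0.1, c["B"].shape)  # stand-in for local training

clients = aggregate(clients)
# After aggregation, all clients share one B but retain distinct A matrices.
assert all(np.allclose(clients[0]["B"], c["B"]) for c in clients)
assert not np.allclose(clients[0]["A"], clients[1]["A"])
```

The sketch only conveys the communication pattern: under this rule, clients exchange the noise-prone $B$ to reach a shared consensus while the feature-extracting $A$ never leaves the device.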