FedPissa: Towards Federated Personalized Adaptation of Foundation Models via LoRA Subspace Mapping
Abstract
LoRA efficiently adapts large pre-trained models via low-rank updates, making it a strong parameter-efficient fine-tuning (PEFT) method. When integrated with Federated Learning (FL), it enables collaborative fine-tuning across distributed clients, leveraging rich downstream data without exposing private information. However, this strategy is hindered by data heterogeneity, which limits personalization performance. To address this, personalized FedLoRA approaches have been proposed that employ a dual-LoRA architecture, e.g., one branch for global knowledge and another for client-specific adaptation. Nevertheless, this dual-LoRA design introduces additional computational overhead and structural redundancy. To address this limitation, we propose FedPissa, the first framework that rethinks the single-LoRA design via selective aggregation and subspace decorrelation. We selectively aggregate LoRA components based on their aggregation dynamics, and further apply a decorrelated subspace projection to mitigate conflicts among heterogeneous updates, reducing cross-client interference and improving personalized adaptation. Experiments in textual and visual scenarios show that FedPissa not only achieves up to 35% lower communication and computation cost, but also improves overall accuracy by up to 8% compared to its counterparts.
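To make the low-rank adaptation underlying the abstract concrete, the following is a minimal, self-contained sketch of a LoRA-style forward pass (illustrative only; the variable names and shapes are our own assumptions, not FedPissa's implementation):

```python
import numpy as np

# Illustrative LoRA sketch: a frozen weight W0 (d_out x d_in) is adapted
# by a trainable low-rank update B @ A, with A (r x d_in), B (d_out x r),
# and r << min(d_out, d_in). Only A and B would be trained and, in the
# federated setting, communicated between clients and the server.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 12, 2, 4.0

W0 = rng.standard_normal((d_out, d_in))   # frozen pre-trained weight
A = rng.standard_normal((r, d_in))        # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection (zero-init)

def lora_forward(x, W0, A, B, alpha, r):
    """y = W0 @ x + (alpha / r) * B @ (A @ x)."""
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x, W0, A, B, alpha, r)
# With B initialized to zero, the adapted model starts identical to the base.
assert np.allclose(y, W0 @ x)
```

The update B @ A has rank at most r, which is why only a small fraction of parameters needs to be trained and exchanged in FedLoRA-style systems.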