Shift-Dependent Asymmetry: Orthogonal Inverse Low-Rank Adaptation for Federated Medical Segmentation
Abstract
Low-Rank Adaptation (LoRA) enables efficient federated fine-tuning of segmentation foundation models for medical imaging. However, most federated LoRA methods adopt a uniform aggregation rule, which breaks under the encoder–decoder asymmetry in medical segmentation: the encoder is dominated by appearance shifts, while the decoder is dominated by supervision variations. This mismatch entangles shared anatomy with site-specific biases and harms generalization. To address this, we propose Inverse Asymmetric Tuning (IAT). IAT aligns adaptation with its sources of heterogeneity by selectively personalizing module-specific adaptation components: in the encoder, to absorb acquisition-driven appearance shifts, and in the decoder, to accommodate site-dependent supervision, while a shared pathway retains transferable consensus. However, structural separation alone is insufficient under LoRA's bilinear parameterization, where multiplicative coupling can still cause site-specific updates to leak into the shared update direction. We therefore introduce a Subspace Orthogonality Regularizer that penalizes shared–local collinearity in the effective update space, mitigating leakage without increasing communication. Extensive experiments demonstrate consistent improvements over strong federated LoRA and parameter-efficient federated learning baselines.
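The Subspace Orthogonality Regularizer penalizes collinearity between the shared and site-local effective LoRA updates. A minimal sketch of one plausible form of such a penalty is below; the exact formulation in the paper may differ. A LoRA update is the bilinear product ΔW = BA, so the penalty here is the squared cosine similarity between the flattened shared update B_s A_s and local update B_l A_l (the function name `orthogonality_penalty` and the cosine-based form are illustrative assumptions, not the paper's definition).

```python
import numpy as np


def orthogonality_penalty(B_s, A_s, B_l, A_l, eps=1e-8):
    """Illustrative penalty on shared-local collinearity in the
    effective update space of LoRA.

    B_s @ A_s is the shared low-rank update, B_l @ A_l the site-local
    one; the penalty is their squared cosine similarity, which is 0
    when the two effective updates are orthogonal and 1 when they are
    collinear. NOTE: this is an assumed form, not the paper's exact
    regularizer.
    """
    dW_s = (B_s @ A_s).ravel()  # flatten shared effective update
    dW_l = (B_l @ A_l).ravel()  # flatten local effective update
    cos = dW_s @ dW_l / (np.linalg.norm(dW_s) * np.linalg.norm(dW_l) + eps)
    return float(cos ** 2)


# Orthogonal effective updates incur (near-)zero penalty:
B_s, A_s = np.array([[1.0], [0.0]]), np.array([[1.0, 0.0]])
B_l, A_l = np.array([[0.0], [1.0]]), np.array([[0.0, 1.0]])
print(orthogonality_penalty(B_s, A_s, B_l, A_l))  # ~0.0

# Identical updates incur the maximum penalty:
print(orthogonality_penalty(B_s, A_s, B_s, A_s))  # ~1.0
```

Because the penalty is computed locally on each client's own LoRA factors, it adds no parameters to the communication round, consistent with the abstract's claim that leakage is mitigated without increasing communication.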