FedTreeLoRA: Reconciling Statistical and Functional Heterogeneity in Federated LoRA Fine-Tuning
Abstract
Federated Learning (FL) with Low-Rank Adaptation (LoRA) has become a standard approach for privacy-preserving LLM fine-tuning. However, existing personalized methods predominantly operate under a restrictive Flat-Model Assumption: they address client-side statistical heterogeneity but treat the model as a monolithic block, ignoring the functional heterogeneity across LLM layers. We argue that these two dimensions, statistical (horizontal) and functional (vertical), are orthogonal in source yet coupled in interaction, implying that the optimal depth of parameter sharing depends functionally on client similarity. To address this, we propose FedTreeLoRA, a framework that employs tree-structured aggregation for fine-grained, layer-wise alignment. By dynamically constructing an aggregation hierarchy, FedTreeLoRA allows clients to share broad consensus on shallow 'trunks' while progressively specializing on deep 'branches'. Experiments on NLU and NLG benchmarks demonstrate that FedTreeLoRA significantly outperforms state-of-the-art methods by effectively reconciling generalization and personalization.
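To make the trunk/branch idea concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm) of a two-level layer-wise tree aggregation: shallow layers are averaged globally across all clients, while deeper layers are averaged only within each client's cluster. The function name `tree_aggregate`, the fixed two-level hierarchy, and the cluster assignments are all hypothetical simplifications for illustration.

```python
import numpy as np

def tree_aggregate(client_updates, cluster_of, trunk_depth):
    """Hypothetical two-level tree aggregation of per-layer LoRA updates.

    client_updates: list (per client) of lists (per layer) of np.ndarray.
    cluster_of:     cluster id for each client (the 'branch' grouping).
    trunk_depth:    number of shallow layers shared globally.
    Returns one personalized, aggregated update list per client.
    """
    n_clients = len(client_updates)
    n_layers = len(client_updates[0])
    out = [[None] * n_layers for _ in range(n_clients)]
    for layer in range(n_layers):
        if layer < trunk_depth:
            # Trunk: broad consensus, averaged over all clients.
            avg = np.mean(
                [client_updates[c][layer] for c in range(n_clients)], axis=0)
            for c in range(n_clients):
                out[c][layer] = avg
        else:
            # Branches: specialization, averaged within each cluster only.
            for k in set(cluster_of):
                members = [c for c in range(n_clients) if cluster_of[c] == k]
                avg = np.mean(
                    [client_updates[c][layer] for c in members], axis=0)
                for c in members:
                    out[c][layer] = avg
    return out
```

In this sketch the hierarchy is fixed; the paper's framework instead constructs the aggregation tree dynamically, so the depth at which clients diverge into separate branches can adapt to their measured similarity.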