CSPLoRA: Confidence-Guided Structure Planning for Low-Rank Adaptation
Abstract
Low-Rank Adaptation (LoRA) has become the de facto paradigm for parameter-efficient fine-tuning, and its effectiveness is critically influenced by how rank is allocated across modules. However, existing approaches face a fundamental dilemma: uniform allocation ignores module heterogeneity, while adaptive methods introduce expensive training overhead or lack reusability across configurations. We propose \textbf{CSPLoRA} (Confidence-guided Structure Planning for LoRA), a decoupled framework that reweights probe samples by prediction uncertainty to obtain higher-fidelity module importance estimates. The key insight is that hard samples---those the model struggles with---provide more informative gradient signals for identifying critical modules than easy samples. Combined with scale-invariant allocation, our method produces reusable structural priors that transfer across different rank budgets and LoRA backends, enabling "probe once, deploy everywhere." Experiments on GLUE, commonsense reasoning, and arithmetic tasks show that CSPLoRA consistently improves over uniform LoRA (+1.25 points on LLaMA-2-7B commonsense reasoning) with a comparable parameter budget, and the same structure transfers directly to other LoRA variants.
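To make the two core ideas in the abstract concrete, the sketch below illustrates (i) uncertainty-based reweighting of probe samples, where low-confidence (hard) samples receive larger weight, and (ii) scale-invariant rank allocation, where only relative module importances determine the rank split. The exponential weighting, the `temperature` parameter, and the proportional rounding scheme are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def confidence_weights(probs_correct, temperature=1.0):
    """Weight probe samples by prediction uncertainty.

    probs_correct: model confidence on the correct label per sample.
    Hard samples (low confidence) get larger weight; the exponential
    form is an illustrative choice, not necessarily CSPLoRA's exact one.
    """
    uncertainty = 1.0 - np.asarray(probs_correct, dtype=float)
    w = np.exp(uncertainty / temperature)
    return w / w.sum()  # normalized weights for the importance estimate

def allocate_ranks(importance, total_rank_budget, min_rank=1):
    """Scale-invariant allocation: multiplying all importances by a
    constant leaves the resulting rank split unchanged."""
    imp = np.asarray(importance, dtype=float)
    shares = imp / imp.sum()
    ranks = np.round(shares * total_rank_budget).astype(int)
    return np.maximum(min_rank, ranks)
```

Because allocation depends only on normalized shares, the same importance profile can be reused under a different total rank budget, which is what lets one probe pass serve many deployment configurations.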