Beyond Static Allocation: Dynamic Sensitivity-Aware Fine-Tuning for Vision Transformers
Abstract
Existing Parameter-Efficient Fine-Tuning (PEFT) methods are constrained by a static allocation paradigm, which overlooks the model's evolving optimization priorities during training. To address this, we introduce Dynamic Adaptive Fine-tuning (DAF), a novel framework that periodically re-evaluates and reconfigures the trainable structure based on a context-aware decoupled sensitivity analysis. DAF employs a Rebuild-and-Refocus strategy: it preserves learned knowledge by freezing outdated modules while decisively reallocating the parameter budget to newly identified critical regions. Extensive experiments on challenging vision benchmarks demonstrate that DAF significantly outperforms mainstream static PEFT methods and achieves state-of-the-art performance and efficiency, particularly under extreme parameter budgets. Our work challenges the static allocation paradigm that dominates the field, offering a more intelligent and efficient way to adapt large pretrained models. The code is available at https://anonymous.4open.science/r/DAF-9372.
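To make the abstract's core loop concrete, the sketch below illustrates the general idea in PyTorch: periodically score each block's sensitivity, freeze blocks that no longer matter, and move the trainable budget to the blocks that do. Everything here is an assumption for illustration only: the gradient-weight saliency proxy, the probe pass, and the names `module_sensitivity` and `rebuild_and_refocus` are hypothetical stand-ins, not the paper's actual sensitivity criterion or implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

def module_sensitivity(module: nn.Module) -> float:
    # Hypothetical first-order saliency proxy: sum of |grad * weight| over
    # the module's parameters. Stands in for the paper's context-aware
    # decoupled sensitivity analysis, which is not reproduced here.
    score = 0.0
    for p in module.parameters():
        if p.grad is not None:
            score += (p.grad.detach() * p.detach()).abs().sum().item()
    return score

def rebuild_and_refocus(blocks, model, probe_batch, budget: int) -> None:
    # Temporarily unfreeze everything and run one probe backward pass so
    # every block receives gradients for the sensitivity estimate.
    for p in model.parameters():
        p.requires_grad_(True)
        p.grad = None
    x, y = probe_batch
    nn.functional.cross_entropy(model(x), y).backward()
    # Keep only the top-`budget` blocks trainable; freeze the rest so
    # their learned weights are preserved while the budget is reallocated.
    scores = [module_sensitivity(b) for b in blocks]
    top = set(sorted(range(len(blocks)), key=scores.__getitem__, reverse=True)[:budget])
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad_(i in top)
            p.grad = None  # discard probe gradients before training resumes

# Toy demonstration: a stack of linear "blocks" adapted under a budget of 2,
# with the trainable structure re-evaluated every 20 steps.
blocks = nn.ModuleList([nn.Linear(16, 16) for _ in range(6)])
model = nn.Sequential(*blocks, nn.Linear(16, 4))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
for step in range(100):
    if step % 20 == 0:  # periodic re-evaluation of the trainable structure
        rebuild_and_refocus(list(blocks), model, (x, y), budget=2)
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```

The probe pass exists only so frozen blocks can regain gradients and compete for the budget at each re-evaluation; without it, a block frozen once would score zero forever and the allocation would degenerate back to static.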