

Contributed Talk & Poster in Workshop: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)

SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors

Vijay Lingam · Atula Tejaswi · Aditya Vavre · Aneesh Shetty · Gautham Krishna Gudur · Joydeep Ghosh · Eunsol Choi · Alexandros Dimakis · Aleksandar Bojchevski · Sujay Sanghavi


Abstract:

Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights $\mathbf{W}$ and inject learnable matrices $\mathbf{\Delta W}$. These $\mathbf{\Delta W}$ matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on $\mathbf{\Delta W}$ depends on the specific weight matrix $\mathbf{W}$. Specifically, SVFT updates $\mathbf{W}$ as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006% to 0.25% of parameters, outperforming existing methods that only recover up to 85% performance using 0.03% to 0.8% of the trainable parameter budget.
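As a rough illustration of the idea described in the abstract, the sketch below applies an SVFT-style update to a single linear layer in PyTorch: the frozen weight is factored as $\mathbf{W} = \mathbf{U} \mathbf{S} \mathbf{V}^\top$, and only coefficients on outer products of its singular vectors are trained. The class name, the purely diagonal coefficient pattern, and all other details are assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SVFTLinear(nn.Module):
    """Minimal SVFT-style adapter sketch (hypothetical, not the reference code).

    The frozen weight W is decomposed as U diag(S) V^T; the update Delta W is a
    combination of outer products u_i v_i^T whose coefficients are the only
    trainable parameters. Here the sparsity pattern is the simplest (diagonal)
    one; denser off-diagonal patterns would add expressivity and parameters.
    """

    def __init__(self, weight: torch.Tensor):
        super().__init__()
        # Singular vectors of the frozen pre-trained weight.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)    # (out_features, r)
        self.register_buffer("S", S)    # (r,)
        self.register_buffer("Vh", Vh)  # (r, in_features)
        # Trainable coefficients: one scale per singular direction,
        # initialized to zero so training starts from the pre-trained weight.
        self.coeff = nn.Parameter(torch.zeros_like(S))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: W + Delta W = U diag(S + coeff) V^T
        w_eff = self.U @ torch.diag(self.S + self.coeff) @ self.Vh
        return x @ w_eff.T
```

Under this reading, the number of trainable parameters per layer equals the number of nonzero coefficient positions, which is how the method trades off parameter count against expressivity.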
