

Poster in Workshop: ICML 2024 Workshop on Foundation Models in the Wild

Combining Pre-trained LoRA Modules Improves Few-shot Adaptation of Foundation Models to New Tasks

Nader Asadi · Mahdi Beitollahi · Yasser Khalil · Yinchuan Li · Guojun Zhang · Xi Chen

Keywords: [ Parameter-Efficient Fine-Tuning ] [ Few-Shot Adaptation ] [ Model Merging ]


Abstract:

The efficiency of low-rank adaptation (LoRA) has facilitated the creation and sharing of hundreds of custom LoRA modules for various downstream tasks. In this paper, we explore the composability of LoRA modules, examining whether combining these pre-trained modules enhances the generalization of foundation models to unseen downstream tasks. Our investigation evaluates two approaches: (a) uniform composition, which averages the upstream LoRA modules with equal weights, and (b) learned composition, where we learn a weight for each upstream module and perform weighted averaging. Our experimental results on both vision and language models reveal that in few-shot settings, where only a limited number of samples are available for the downstream task, both uniform and learned composition yield better transfer accuracy, outperforming full fine-tuning and training a LoRA module from scratch. Our research unveils the potential of composition strategies for enhancing the transferability of foundation models in low-shot settings.
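The following is a minimal sketch of the two composition strategies described in the abstract, not the authors' implementation. It assumes each upstream LoRA module exposes low-rank factors A_i and B_i for a given base weight, and the softmax normalization of the learned mixing weights is an illustrative assumption.

```python
import torch
import torch.nn as nn


def uniform_composition(lora_As, lora_Bs):
    """Average the low-rank updates of all upstream LoRA modules with equal weights."""
    # Each delta_i = B_i @ A_i is the weight update contributed by module i.
    deltas = [B @ A for A, B in zip(lora_As, lora_Bs)]
    return sum(deltas) / len(deltas)


class LearnedComposition(nn.Module):
    """Weighted average of frozen upstream LoRA updates with learnable mixing weights."""

    def __init__(self, lora_As, lora_Bs):
        super().__init__()
        # Upstream factors are kept frozen; only the mixing weights are trained
        # on the few-shot downstream data.
        self.As = [A.detach() for A in lora_As]
        self.Bs = [B.detach() for B in lora_Bs]
        self.logits = nn.Parameter(torch.zeros(len(lora_As)))

    def forward(self):
        # Softmax keeps the mixing weights positive and summing to one
        # (an assumption of this sketch).
        alphas = torch.softmax(self.logits, dim=0)
        deltas = [B @ A for A, B in zip(self.As, self.Bs)]
        return sum(a * d for a, d in zip(alphas, deltas))


# Usage: add the composed update to the frozen base weight of a layer, e.g.
#   W_adapted = W_base + uniform_composition(As, Bs)
# or, after optimizing `logits` on the few-shot task,
#   W_adapted = W_base + learned_comp()
```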
