

Poster in Workshop: ES-FoMo II: 2nd Workshop on Efficient Systems for Foundation Models

Unlocking the Global Synergies in Low-Rank Adapters

Zixi Zhang · Cheng Zhang · Xitong Gao · Robert Mullins · George Constantinides · Yiren Zhao


Abstract:

Low-Rank Adaptation (LoRA) has become the de facto parameter-efficient fine-tuning technique for large language models. We present HeteroLoRA, a lightweight search algorithm that leverages zero-cost proxies to allocate the limited budget of LoRA trainable parameters across the model for better fine-tuned performance. Beyond allocation for standard LoRA-adapted models, we also demonstrate the efficacy of HeteroLoRA in a more challenging search space that includes both LoRA modules and LoRA-adapted shortcut connections. Experiments show that HeteroLoRA improves model performance under the same parameter budget; for example, on MRPC we observe a 1.6% accuracy improvement with a similar training parameter budget. We have open-sourced our algorithm.
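The following is a minimal illustrative sketch of the idea described in the abstract: score candidate LoRA insertion sites with a zero-cost proxy and split a fixed rank budget across them. The SNIP-style gradient-norm proxy, the greedy proportional allocator, and all names (LoRALinear, zero_cost_proxy, allocate_ranks) are assumptions for illustration, not the authors' released implementation.

```python
# Sketch of proxy-guided LoRA rank allocation (illustrative, not the HeteroLoRA code).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.scaling = alpha / max(rank, 1)
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


def zero_cost_proxy(module: LoRALinear, x, y, loss_fn) -> float:
    """SNIP-style proxy (assumed): sum of |grad * param| over the LoRA matrices
    after a single backward pass, with no training."""
    module.zero_grad(set_to_none=True)
    loss_fn(module(x), y).backward()
    score = 0.0
    for p in (module.lora_A, module.lora_B):
        if p.grad is not None:
            score += (p.grad * p).abs().sum().item()
    return score


def allocate_ranks(scores: dict, total_rank_budget: int) -> dict:
    """Assumed greedy allocator: split a total rank budget across candidate
    sites in proportion to their proxy scores."""
    total = sum(scores.values()) or 1.0
    return {name: int(round(total_rank_budget * s / total)) for name, s in scores.items()}


if __name__ == "__main__":
    torch.manual_seed(0)
    # Two toy candidate sites; a real model would enumerate attention/MLP projections
    # and (per the abstract) LoRA-adapted shortcut connections as well.
    sites = {"q_proj": nn.Linear(32, 32), "v_proj": nn.Linear(32, 32)}
    x, y = torch.randn(8, 32), torch.randn(8, 32)
    loss_fn = nn.MSELoss()

    scores = {}
    for name, lin in sites.items():
        probe = LoRALinear(lin, rank=4)  # small probe rank used only for scoring
        with torch.no_grad():
            probe.lora_B.normal_(0, 0.01)  # perturb B so gradients reach A in the probe
        scores[name] = zero_cost_proxy(probe, x, y, loss_fn)

    print("proxy scores:", scores)
    print("allocated ranks:", allocate_ranks(scores, total_rank_budget=16))
```

Under these assumptions, sites with higher proxy scores receive larger ranks, so the total trainable-parameter budget is spent where the proxy predicts the greatest benefit.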
