

Oral in Workshop: ES-FoMo: Efficient Systems for Foundation Models

🎤 Memory-Efficient Selective Fine-Tuning

Antoine Simoulin · Namyong Park · Xiaoyi Liu · Grey Yang


Abstract:

We propose an approach for reducing the memory required to fine-tune transformer-based models. During the backward pass, our approach propagates the gradient only through a small number of input positions, while freezing the others. Thus, during the forward pass, we only save the subset of intermediate activations for which the computed gradient will not be zero. We show that our approach leads to performance on par with full fine-tuning while requiring up to only a third of the GPU memory. Our approach is particularly efficient for fine-tuning language models with on the order of a hundred million parameters, and it allows such models to be fine-tuned on consumer hardware while maintaining a large batch size.
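The abstract does not include code, but the core idea of restricting gradient flow to a subset of input positions can be illustrated with a minimal PyTorch sketch. The function name `selectively_detach` and the `keep_ratio` parameter are illustrative assumptions, not the authors' implementation; a real memory-efficient variant would also avoid caching activations for the frozen positions inside each transformer layer, which this sketch does not do.

```python
import torch


def selectively_detach(embeddings: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep gradient flow for a random subset of input positions; detach the rest.

    Forward values are unchanged; only the backward graph is restricted.
    (Hypothetical helper for illustration, not the paper's implementation.)
    """
    batch, seq_len, _ = embeddings.shape
    keep = torch.rand(batch, seq_len, 1, device=embeddings.device) < keep_ratio
    # Where `keep` is False, the detached copy is used, so no gradient
    # propagates back into those positions' embeddings.
    return torch.where(keep, embeddings, embeddings.detach())


# Usage sketch: apply after the embedding layer, before the transformer blocks.
if __name__ == "__main__":
    emb = torch.nn.Embedding(1000, 64)
    ids = torch.randint(0, 1000, (2, 16))
    hidden = selectively_detach(emb(ids), keep_ratio=0.25)
    hidden.sum().backward()
    # Gradient reaches the embedding weights only through the kept positions.
    print(emb.weight.grad.abs().sum())
```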
