

Poster in Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

Reinforcement Learning Assisted Layer-wise Fine-Tuning for Transfer Learning

Tanvir Mahmud · Natalia Frumkin · Diana Marculescu


Abstract:

Data scarcity is one of the major challenges in many real-world applications. To handle low-data regimes, practitioners often take an existing pre-trained network and fine-tune it on a data-deficient target task. In this setup, a network is pre-trained on a source dataset and fine-tuned on a different, potentially smaller, target dataset. We address two critical challenges in transfer learning via fine-tuning: (1) the required amount of fine-tuning depends strongly on the distribution shift from the source to the target dataset, and (2) layer-wise adjustments are needed so the model can adapt to this distribution shift while preserving the pre-trained network's feature extractor. To overcome these challenges, we propose RL-Tune, a layer-wise fine-tuning framework for transfer learning that leverages reinforcement learning to adjust learning rates as a function of the target data shift. In our RL framework, the state is a collection of intermediate feature activations generated from training samples. To accommodate the different abstraction levels of layers, the agent generates layer-wise learning rates as actions based on the current state and receives the sample accuracy as a reward. RL-Tune outperforms other state-of-the-art approaches on standard transfer learning benchmarks by a large margin, e.g., a 6.2% mean accuracy improvement on CUBS-200-2011 with 15% of the data.
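To make the state-action-reward loop concrete, below is a minimal sketch of layer-wise fine-tuning with per-layer learning rates chosen from intermediate activations. The backbone, the small policy network, the pooled-activation state, and the random surrogate data are illustrative assumptions, not the authors' RL-Tune implementation; a full version would additionally update the policy with a reinforcement learning objective driven by the accuracy reward.

```python
# Hedged sketch: layer-wise learning rates from a placeholder policy.
# Assumptions (not from the paper): a tiny MLP backbone, random data,
# mean-absolute-activation state features, and a sigmoid policy head.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "pre-trained" network with three layers at different abstraction levels.
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
layers = [backbone[0], backbone[2], backbone[4]]

# One optimizer parameter group per layer so each layer gets its own learning rate.
optimizer = torch.optim.SGD(
    [{"params": l.parameters(), "lr": 1e-3} for l in layers]
)

# Placeholder policy: maps a pooled summary of intermediate activations (state)
# to one learning rate per layer (action).
policy = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3), nn.Sigmoid())

criterion = nn.CrossEntropyLoss()

for step in range(5):
    x = torch.randn(16, 32)                      # surrogate target-task batch
    y = torch.randint(0, 10, (16,))

    # State: mean absolute activation of each layer on the current batch.
    with torch.no_grad():
        h1 = torch.relu(backbone[0](x))
        h2 = torch.relu(backbone[2](h1))
        h3 = backbone[4](h2)
        state = torch.stack([h1.abs().mean(), h2.abs().mean(), h3.abs().mean()])

    # Action: layer-wise learning rates, scaled into a small range.
    lrs = 1e-2 * policy(state)
    for group, lr in zip(optimizer.param_groups, lrs):
        group["lr"] = lr.item()

    # One fine-tuning step with the chosen layer-wise learning rates.
    optimizer.zero_grad()
    logits = backbone(x)
    loss = criterion(logits, y)
    loss.backward()
    optimizer.step()

    # Reward: batch accuracy (would drive the policy update in a full RL loop).
    reward = (logits.argmax(dim=1) == y).float().mean().item()
    print(f"step {step}: loss={loss.item():.3f}, reward(acc)={reward:.2f}")
```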
