

Poster in Workshop: Trustworthy Multi-modal Foundation Models and AI Agents (TiFA)

Chained Tuning Leads to Biased Forgetting

Megan Ung · Alicia Sun · Samuel Bell · Levent Sagun · Adina Williams


Abstract:

Large language models (LLMs) are often fine-tuned for downstream tasks, though this can degrade capabilities learned during previous training. This phenomenon, often referred to as catastrophic forgetting, has important implications for the safety of deployed models. In this work, we first show that models trained on downstream tasks after safety tuning forget their safety tuning to a greater extent than models trained in the opposite order. Second, we show that forgetting disproportionately impacts safety information about certain groups. To quantify this phenomenon, we define a new metric we term biased forgetting, and conduct a systematic evaluation of how several fine-tuning methods and hyperparameters affect forgetting. We hope our findings can better inform methods for chaining the fine-tuning of LLMs in continual learning settings, enabling the training of safer and less toxic models.
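The abstract does not give the paper's exact formula, but the idea of a per-group "biased forgetting" score can be sketched as follows. This is a hypothetical formulation, not the authors' definition: per-group forgetting is the drop in safety-task accuracy after downstream fine-tuning, and biased forgetting is how much a group's drop exceeds the mean drop across all groups.

```python
# Hypothetical sketch of a per-group "biased forgetting" score.
# Assumed formulation (NOT the paper's exact metric): forgetting is the
# drop in safety accuracy per group after downstream fine-tuning, and
# biased forgetting is each group's excess drop over the mean drop.

def forgetting(before: dict, after: dict) -> dict:
    """Per-group drop in safety accuracy after downstream fine-tuning."""
    return {g: before[g] - after[g] for g in before}

def biased_forgetting(before: dict, after: dict) -> dict:
    """Excess forgetting per group relative to the mean across groups."""
    drops = forgetting(before, after)
    mean_drop = sum(drops.values()) / len(drops)
    return {g: d - mean_drop for g, d in drops.items()}

# Illustrative (fabricated) safety accuracies before/after fine-tuning.
before = {"group_a": 0.92, "group_b": 0.90, "group_c": 0.91}
after = {"group_a": 0.85, "group_b": 0.70, "group_c": 0.88}
print(biased_forgetting(before, after))
```

Under this sketch, a positive score for a group means the model forgot safety behavior for that group more than average, which is the disparity the abstract's metric is designed to surface.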
