Poster in Workshop: Next Generation of AI Safety
Chained Tuning Leads to Biased Forgetting
Megan Ung · Alicia Sun · Samuel Bell · Levent Sagun · Adina Williams
Keywords: [ continual learning ] [ catastrophic forgetting ] [ language model safety ]
Large language models (LLMs) are often fine-tuned for use on downstream tasks, though this can degrade capabilities learned during previous training. This phenomenon, often referred to as catastrophic forgetting, has important potential implications for the safety of deployed models. In this work, we first show that models trained on downstream tasks forget their safety tuning to a greater extent than models trained in the opposite order. Second, we show that forgetting disproportionately impacts safety information about certain groups. To quantify this phenomenon, we define a new metric we term biased forgetting, and conduct a systematic evaluation of the effects of several fine-tuning methods and hyperparameters on forgetting. We hope our findings can better inform methods for chaining the fine-tuning of LLMs in continual learning settings to enable training of safer and less toxic models.
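The abstract does not give the formal definition of the biased forgetting metric. As a minimal illustrative sketch only, one plausible reading is to measure, for each demographic group, the drop in safety-task performance after downstream fine-tuning, and then compare those drops across groups. All function names, group labels, and scores below are hypothetical and not taken from the paper:

```python
# Hypothetical sketch of a "biased forgetting" style measurement.
# Assumption (not from the abstract): safety performance is tracked as a
# per-group score (e.g., fraction of safe responses) before and after
# downstream fine-tuning, and bias is the spread in forgetting across groups.

def forgetting(before: dict, after: dict) -> dict:
    """Per-group drop in safety score after downstream fine-tuning."""
    return {group: before[group] - after[group] for group in before}

def forgetting_spread(before: dict, after: dict) -> float:
    """Spread of forgetting across groups: max drop minus min drop.

    A value of 0 means forgetting is uniform across groups; larger values
    mean some groups lose disproportionately more safety behavior.
    """
    drops = forgetting(before, after)
    return max(drops.values()) - min(drops.values())

# Toy per-group safety scores (fraction of safe responses), illustrative only.
before = {"group_a": 0.95, "group_b": 0.94, "group_c": 0.96}
after = {"group_a": 0.90, "group_b": 0.75, "group_c": 0.88}

print(forgetting(before, after))
print(forgetting_spread(before, after))
```

Under this toy data, group_b loses far more safety performance than the others, which is the kind of disparity a biased forgetting metric would flag even when average forgetting looks modest.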