Memorization in NLP Fine-tuning Methods
FatemehSadat Mireshghallah · Archit Uniyal · Tianhao Wang · David Evans · Taylor Berg-Kirkpatrick
Event URL: https://openreview.net/forum?id=TUJYLRf2caH

Large language models have been shown to pose privacy risks through memorization of their training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been given to the fine-tuning phase, and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, fine-tuning only the model head, or training adapters) compare in terms of memorization risk. This is an increasing concern as the "pre-train and fine-tune" paradigm proliferates. In this paper, we empirically study memorization in fine-tuning methods using membership inference and extraction attacks, and show that their susceptibility to these attacks differs substantially. We observe that fine-tuning only the head of the model has the highest susceptibility to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
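To make the attack setup concrete, the following is a minimal sketch of a loss-based membership inference baseline against a fine-tuned causal language model. The model name, the example member and non-member sentences, and the simple loss-threshold score are illustrative assumptions for exposition, not the exact attack used in the paper.

# Minimal sketch of a loss-based membership inference baseline for a
# fine-tuned causal LM. Model, data, and the loss-threshold score are
# illustrative placeholders, not the paper's attack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder for the fine-tuned model
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def nll(text: str) -> float:
    """Average per-token negative log-likelihood under the (fine-tuned) model."""
    enc = tok(text, return_tensors="pt").to(device)
    out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Hypothetical evaluation sets: fine-tuning members vs. held-out non-members.
members = ["an example sentence that was seen during fine-tuning ..."]
non_members = ["an example sentence that was never seen during fine-tuning ..."]

# Lower loss suggests memorization, so the negated loss serves as the membership score.
scores = [-nll(t) for t in members + non_members]
labels = [1] * len(members) + [0] * len(non_members)
print("membership-inference AUC:", roc_auc_score(labels, scores))

In this setup, a higher AUC indicates that the fine-tuned model's losses separate members from non-members more cleanly, i.e., greater memorization; comparing this score across full-model, head-only, and adapter fine-tuning is one way to instantiate the comparison described above.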

Author Information

FatemehSadat Mireshghallah (University of California San Diego)
Archit Uniyal (Panjab University, Chandigarh, India)
Tianhao Wang (University of Virginia, Charlottesville)
David Evans (University of Virginia)
Taylor Berg-Kirkpatrick (University of California San Diego)