

Poster

DPZero: Private Fine-Tuning of Language Models without Backpropagation

Liang Zhang · Bingcong Li · Kiran Thekumparampil · Sewoong Oh · Niao He


Abstract:

The widespread practice of fine-tuning large language models (LLMs) on domain-specific data faces two major challenges in memory and privacy. First, as the size of LLMs continues to grow, the memory demands of gradient-based training methods via backpropagation become prohibitively high. Second, given the tendency of LLMs to memorize training data, it is important to protect potentially sensitive information in the fine-tuning data from being regurgitated. Zeroth-order methods, which rely solely on forward passes, substantially reduce memory consumption during training. However, directly combining them with standard differentially private gradient descent degrades as the model size grows. To bridge this gap, we introduce DPZero, a novel private zeroth-order algorithm with nearly dimension-independent rates. The memory efficiency of DPZero is demonstrated in privately fine-tuning RoBERTa on six downstream tasks.
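For intuition, the sketch below illustrates one way a private zeroth-order update can be carried out: each example contributes only a scalar directional finite difference computed from two forward passes along a shared random direction, so clipping and Gaussian noise are applied to a one-dimensional quantity rather than a full d-dimensional gradient. This is a minimal illustrative sketch, not the authors' released implementation; the function and parameter names (`dp_zeroth_order_step`, `mu`, `clip`, `noise_multiplier`) are assumptions introduced here.

```python
# Minimal sketch (illustrative, not the paper's code) of a differentially
# private zeroth-order step: per-example losses are probed with two forward
# passes along one shared random direction, the scalar finite differences are
# clipped and noised, and the parameters move along that single direction.
import numpy as np

def dp_zeroth_order_step(params, loss_fn, batch, lr=1e-3, mu=1e-3,
                         clip=1.0, noise_multiplier=1.0, rng=None):
    """One hypothetical DP zeroth-order update.

    params : 1-D numpy array of model parameters
    loss_fn: callable (params, example) -> scalar loss (forward pass only)
    batch  : iterable of training examples
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(params.shape)  # shared random direction

    # Scalar directional finite difference per example (two forward passes each).
    diffs = []
    for example in batch:
        plus = loss_fn(params + mu * z, example)
        minus = loss_fn(params - mu * z, example)
        diffs.append((plus - minus) / (2.0 * mu))

    # Clip each scalar to bound per-example sensitivity, then add Gaussian
    # noise to the scalar aggregate -- the noise is 1-D, not d-dimensional.
    clipped = [float(np.clip(d, -clip, clip)) for d in diffs]
    noisy = (sum(clipped)
             + noise_multiplier * clip * rng.standard_normal()) / len(clipped)

    # Update along the single direction z; no backpropagation is needed.
    return params - lr * noisy * z


# Toy usage on a synthetic least-squares problem (illustrative only).
rng = np.random.default_rng(0)
w = rng.standard_normal(10)
data = [(rng.standard_normal(10), 1.0) for _ in range(8)]
loss = lambda p, ex: 0.5 * (p @ ex[0] - ex[1]) ** 2
w = dp_zeroth_order_step(w, loss, data, rng=rng)
```

Because the privacy noise perturbs a scalar rather than a d-dimensional gradient, this style of update is one plausible route to rates that depend only weakly on model dimension, which is the regime the abstract refers to.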
