
Model-tuning Via Prompts Makes NLP Models Adversarially Robust
Mrigank Raman · Pratyush Maini · Zico Kolter · Zachary Lipton · Danish Pruthi
Event URL: https://openreview.net/forum?id=UKeLFIqH8H

In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations such as word-level synonym substitutions. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than modifying the model (by appending an MLP head), MVP instead modifies the input (by appending a prompt template). Across three classification datasets, MVP improves performance against adversarial word-level synonym substitutions by an average of 8% over standard methods, and even outperforms adversarial-training-based state-of-the-art defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in robust accuracy while maintaining clean accuracy. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the vulnerability of MLP-FT can be attributed mainly to the misalignment between the pre-training and fine-tuning tasks, and to the randomly initialized MLP parameters.
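The contrast between the two adaptation strategies can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration, not the authors' implementation: the tiny encoder stands in for a pretrained masked language model, and names such as `VERBALIZERS`, `MASK_ID`, and the one-token "prompt" are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB, HIDDEN, NUM_CLASSES = 100, 32, 2
MASK_ID = 0                              # assumed id of the [MASK] token
VERBALIZERS = torch.tensor([7, 42])      # e.g. ids of "terrible"/"great"

# Stand-ins for a pretrained masked LM (embeddings + encoder).
embed = nn.Embedding(VOCAB, HIDDEN)
encoder = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)

# MLP-FT: a *new*, randomly initialized classification head.
mlp_head = nn.Linear(HIDDEN, NUM_CLASSES)

def mlp_ft_logits(input_ids):
    """Steps (i)-(iii): random MLP head on the first ([CLS]) position."""
    h = encoder(embed(input_ids))
    return mlp_head(h[:, 0])

def mvp_logits(input_ids):
    """MVP-style: append a prompt containing [MASK], reuse the (tied) LM
    head, and read off logits only for the verbalizer tokens -- so no
    randomly initialized parameters are introduced."""
    prompt = torch.full((input_ids.size(0), 1), MASK_ID)  # "... It was [MASK]."
    h = encoder(embed(torch.cat([input_ids, prompt], dim=1)))
    lm_logits = h[:, -1] @ embed.weight.T                 # tied LM head
    return lm_logits[:, VERBALIZERS]

x = torch.randint(1, VOCAB, (2, 8))       # a toy batch of token ids
print(mlp_ft_logits(x).shape, mvp_logits(x).shape)
```

Both functions yield one logit per class, but MVP obtains them by keeping the pretraining task format (predicting a masked token), which the abstract identifies as a source of its robustness.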

Author Information

Mrigank Raman (Carnegie Mellon University)
Pratyush Maini (Carnegie Mellon University)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Zachary Lipton (CMU & Abridge)
Danish Pruthi (Indian Institute of Science, Bangalore)
