Calibrate Before Use: Improving Few-shot Performance of Language Models
Tony Z. Zhao · Eric Wallace · Shi Feng · Dan Klein · Sameer Singh

Thu Jul 22 06:00 PM -- 06:20 PM (PDT)

GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given a training prompt and a content-free test input such as "N/A". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's accuracy (up to 30.0% absolute) across different choices of the prompt, while also making learning considerably more stable.
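The calibration step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' released code: it assumes the model's output probabilities over a fixed label set are already available as vectors, and it uses the diagonal correction W = diag(p_cf)^-1 (with zero bias) so that the content-free input's calibrated prediction becomes uniform.

```python
import numpy as np

def contextual_calibration(p_cf, p_test):
    """Calibrate a test prediction using content-free (e.g. "N/A") probabilities.

    p_cf:   model probabilities over the answer labels for the content-free input,
            which reveal the prompt's bias toward certain answers.
    p_test: uncalibrated probabilities for a real test input.

    Fits W = diag(p_cf)^-1 so that W @ p_cf is uniform after renormalization,
    then applies the same correction to the test prediction.
    """
    W = np.diag(1.0 / np.asarray(p_cf, dtype=float))
    q = W @ np.asarray(p_test, dtype=float)
    return q / q.sum()  # renormalize into a probability distribution
```

For example, if the prompt biases the model toward the first label (p_cf = [0.7, 0.3]), a test prediction of [0.6, 0.4] is rescaled to roughly [0.39, 0.61], flipping the predicted label once the bias is divided out.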

Author Information

Tony Z. Zhao (UC Berkeley)
Eric Wallace (UC Berkeley)
Shi Feng (University of Maryland)
Dan Klein (UC Berkeley)
Sameer Singh (University of California, Irvine)
