Calibrate Before Use: Improving Few-shot Performance of Language Models
GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given a training prompt and a content-free test input such as "N/A". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's accuracy (up to 30.0% absolute) across different choices of the prompt, while also making learning considerably more stable.
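The calibration itself is lightweight: query the model with the training prompt followed by a content-free input such as "N/A", record its probability for each answer, and fit a diagonal weight matrix that maps that biased distribution to the uniform one. Below is a minimal NumPy sketch of this idea; the label set, the probabilities, and the function names are hypothetical, and it assumes the model exposes a normalized probability for each answer string.

```python
import numpy as np

def fit_contextual_calibration(p_content_free):
    """Fit diagonal calibration weights from the model's answer
    probabilities on a content-free input (e.g., "N/A"). After
    calibration, the content-free input maps to a uniform
    distribution over the answers."""
    p_cf = np.asarray(p_content_free, dtype=float)
    p_cf = p_cf / p_cf.sum()      # renormalize over the answer set
    return np.diag(1.0 / p_cf)    # W such that W @ p_cf is uniform (up to scale)

def calibrate(p_test, W):
    """Apply the calibration weights to raw answer probabilities."""
    q = W @ np.asarray(p_test, dtype=float)
    return q / q.sum()            # renormalize to a distribution

# Hypothetical sentiment task where the raw model is biased toward
# "Positive" (e.g., because that label ends the prompt's last example).
p_cf = [0.7, 0.3]                 # P(Positive), P(Negative) given "N/A"
W = fit_contextual_calibration(p_cf)
p_test = [0.6, 0.4]               # raw probabilities for a real test input
print(calibrate(p_test, W))       # -> approx. [0.391, 0.609]
```

Under this correction, an input the raw model scores 60/40 toward the over-predicted label flips to roughly 39/61, because the content-free query revealed a 70/30 baseline bias toward that label.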
Author Information
Tony Z. Zhao (UC Berkeley)
Eric Wallace (UC Berkeley)
Shi Feng (University of Maryland)
Dan Klein (UC Berkeley)
Sameer Singh (University of California, Irvine)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Oral: Calibrate Before Use: Improving Few-shot Performance of Language Models »
  Fri. Jul 23rd 01:00 -- 01:20 AM
More from the Same Authors
- 2021 : Feature Attributions and Counterfactual Explanations Can Be Manipulated »
  Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : Reliable Post hoc Explanations: Modeling Uncertainty in Explainability »
  Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : Learning Space Partitions for Path Planning »
  Kevin Yang · Tianjun Zhang · Chris Cummins · Brandon Cui · Benoit Steiner · Linnan Wang · Joseph E Gonzalez · Dan Klein · Yuandong Tian
- 2021 : Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention »
  Abhishek Gupta · Justin Yu · Tony Z. Zhao · Vikash Kumar · Aaron Rovinsky · Kelvin Xu · Thomas Devlin · Sergey Levine
- 2023 Poster: Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling »
  Kolby Nottingham · Prithviraj Ammanabrolu · Alane Suhr · Yejin Choi · Hannaneh Hajishirzi · Sameer Singh · Roy Fox
- 2023 Poster: Poisoning Language Models During Instruction Tuning »
  Alexander Wan · Eric Wallace · Sheng Shen · Dan Klein
- 2023 Poster: Large Language Models Struggle to Learn Long-Tail Knowledge »
  Nikhil Kandpal · Haikang Deng · Adam Roberts · Eric Wallace · Colin Raffel
- 2022 Poster: Deduplicating Training Data Mitigates Privacy Risks in Language Models »
  Nikhil Kandpal · Eric Wallace · Colin Raffel
- 2022 Spotlight: Deduplicating Training Data Mitigates Privacy Risks in Language Models »
  Nikhil Kandpal · Eric Wallace · Colin Raffel
- 2022 Poster: Describing Differences between Text Distributions with Natural Language »
  Ruiqi Zhong · Charlie Snell · Dan Klein · Jacob Steinhardt
- 2022 Spotlight: Describing Differences between Text Distributions with Natural Language »
  Ruiqi Zhong · Charlie Snell · Dan Klein · Jacob Steinhardt
- 2021 : Spotlight »
  Zhiwei (Tony) Qin · Xianyuan Zhan · Meng Qi · Ruihan Yang · Philip Ball · Hamsa Bastani · Yao Liu · Xiuwen Wang · Haoran Xu · Tony Z. Zhao · Lili Chen · Aviral Kumar
- 2020 Poster: Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers »
  Zhuohan Li · Eric Wallace · Sheng Shen · Kevin Lin · Kurt Keutzer · Dan Klein · Joseph Gonzalez
- 2019 Poster: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation »
  Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi
- 2019 Oral: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation »
  Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi
- 2017 Poster: Modular Multitask Reinforcement Learning with Policy Sketches »
  Jacob Andreas · Dan Klein · Sergey Levine
- 2017 Talk: Modular Multitask Reinforcement Learning with Policy Sketches »
  Jacob Andreas · Dan Klein · Sergey Levine