
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
Shayne Longpre · Le Hou · Tu Vu · Albert Webson · Hyung Won Chung · Yi Tay · Denny Zhou · Quoc Le · Barret Zoph · Jason Wei · Adam Roberts

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #130

We study the design decisions of publicly available instruction tuning methods, by reproducing and breaking down the development of Flan 2022 (Chung et al., 2022). Through careful ablation studies on the Flan Collection of tasks and methods, we tease apart the effect of design decisions that enable Flan-T5 to outperform prior work by 3-17% across evaluation settings. We find task balancing and enrichment techniques are overlooked but critical to effective instruction tuning, and in particular, training with mixed prompt settings (zero-shot, few-shot, chain-of-thought) actually yields equivalent or stronger (2%) performance in all settings. In further experiments we show Flan-T5 requires less finetuning to converge higher and faster than T5 on single downstream tasks -- motivating instruction-tuned models as more computationally-efficient starting checkpoints for new tasks. Finally, to accelerate research on instruction tuning, we make the Flan 2022 collection of datasets, templates, and methods publicly available.
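The "mixed prompt settings" idea above — rendering each training example as zero-shot, few-shot, or chain-of-thought so one model handles all three formats — can be sketched as follows. This is a minimal illustration, not the Flan pipeline: the template strings, mixing weights, and function names are assumptions made for clarity.

```python
import random

def format_example(question, answer, setting, exemplars=(), rationale=None):
    """Render one (question, answer) pair under a given prompt setting.

    Templates here are hypothetical stand-ins for the Flan templates."""
    if setting == "zero_shot":
        prompt = f"Q: {question}\nA:"
        target = f" {answer}"
    elif setting == "few_shot":
        # Prepend in-context exemplars before the query.
        shots = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in exemplars)
        prompt = f"{shots}Q: {question}\nA:"
        target = f" {answer}"
    elif setting == "chain_of_thought":
        # Target includes a worked rationale before the final answer.
        prompt = f"Q: {question}\nA: Let's think step by step."
        target = f" {rationale} So the answer is {answer}."
    else:
        raise ValueError(f"unknown setting: {setting}")
    return prompt, target

def sample_setting(rng, weights=None):
    """Pick a prompt setting per example; weights are illustrative only."""
    weights = weights or {"zero_shot": 0.5, "few_shot": 0.3,
                          "chain_of_thought": 0.2}
    settings, probs = zip(*weights.items())
    return rng.choices(settings, weights=probs)[0]
```

At training time, each example in the mixture would be formatted under a freshly sampled setting, so the finetuned model sees all three prompt styles for the same underlying tasks.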

Author Information

Shayne Longpre (Massachusetts Institute of Technology)
Le Hou (Google Research)
Tu Vu (College of Information and Computer Science, University of Massachusetts, Amherst)
Albert Webson (Brown University)
Hyung Won Chung (MIT)

Incoming Google AI Resident (2019)

Yi Tay (Google)
Denny Zhou (Google Brain)
Quoc Le (Google Brain)
Barret Zoph (Google)
Jason Wei (OpenAI)
Adam Roberts (Google DeepMind)