Oral
Specializing Smaller Language Models towards Multi-Step Reasoning
Yao Fu · Hao Peng · Litu Ou · Ashish Sabharwal · Tushar Khot

Wed Jul 26 07:16 PM -- 07:24 PM (PDT) @ Meeting Room 313

The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompting is believed to emerge only in very large-scale models. We show that such abilities can, in fact, be distilled from GPT-3.5 (≥ 175B) down to T5 variants (≤ 11B). We propose model specialization: concentrating a model’s capacity on a target task. The hypothesis is that large models (commonly viewed as larger than 100B) have enough modeling power to cover a large spectrum of tasks, whereas small models (commonly viewed as smaller than 10B) have limited capacity but can achieve decent performance if that capacity is specialized towards a target task. We use multi-step math reasoning as our testbed because it is a prototypical emergent ability. We show two important aspects of model abilities: (1) balancing a language model’s performance across multiple tasks is a delicate matter, as improvements on one task may compromise others; (2) yet by intentionally paying the price of decreased generic ability, we can clearly improve specialized multi-step math reasoning across model scales smaller than 10B. We further discuss important design choices for better generalization, including the data format mixture and the starting model checkpoint. We hope our practice and discoveries serve as a useful step towards specialized smaller models in the new research paradigm set by LLMs.
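
As a rough illustration of the specialization recipe sketched in the abstract, the following is a minimal sketch of fine-tuning a smaller seq2seq checkpoint (e.g. FlanT5) on chain-of-thought solutions distilled from a large teacher model. The file distilled_cot.jsonl, the field names, and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

import json
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# A "smaller" (<10B) starting checkpoint; the paper studies several T5 scales.
MODEL_NAME = "google/flan-t5-xl"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Each record holds a math word problem and a teacher-generated
# chain-of-thought rationale ending in the final answer (hypothetical file/fields).
with open("distilled_cot.jsonl") as f:
    records = [json.loads(line) for line in f]

def collate(batch):
    # Encode questions as inputs and chain-of-thought solutions as targets.
    inputs = tokenizer([r["question"] for r in batch],
                       padding=True, truncation=True, return_tensors="pt")
    targets = tokenizer([r["chain_of_thought"] for r in batch],
                        padding=True, truncation=True, return_tensors="pt")
    labels = targets.input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    return inputs, labels

loader = DataLoader(records, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for epoch in range(3):
    for inputs, labels in loader:
        # Standard seq2seq cross-entropy on the distilled rationales.
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

The sketch omits the design choices the paper actually analyzes (mixing data formats such as in-context vs. zero-shot examples, and the choice of starting checkpoint), which are what trade generic ability for specialized multi-step reasoning.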

Author Information

Yao Fu (University of Edinburgh)
Hao Peng (Allen Institute for Artificial Intelligence)
Litu Ou (University of Edinburgh)

Third-year student at the University of Edinburgh.

Ashish Sabharwal (Allen Institute for AI)
Tushar Khot (Allen Institute for AI)
