Poster
Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence
Gouki Minegishi · Hiroki Furuta · Shohei Taniguchi · Yusuke Iwasawa · Yutaka Matsuo
East Exhibition Hall A-B #E-1208
Transformer-based language models exhibit In-Context Learning (ICL), where predictions are made adaptively based on context. While prior work links induction heads to ICL through a sudden jump in accuracy, this can only account for ICL when the answer is included within the context. However, an important property of practical ICL in large language models is the ability to meta-learn how to solve tasks from context, rather than merely copying answers from context; how such an ability is acquired during training is largely unexplored. In this paper, we experimentally clarify how this meta-learning ability is acquired by analyzing the dynamics of the model's circuits during training. Specifically, we extend the copy task from previous research into an In-Context Meta Learning setting, where models must infer a task from examples to answer queries. Interestingly, in this setting, we find that the process of acquiring such abilities has multiple phases, and that a unique circuit emerges in each phase, in contrast with the single-phase change in induction heads. The emergence of such circuits can be related to several phenomena known in large language models, and our analysis leads to a deeper understanding of the source of the transformer's ICL ability.
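To make the setting concrete, here is a minimal Python sketch (our illustration, not the authors' code) of the kind of In-Context Meta Learning data the abstract describes: each sequence shows (input, label) example pairs drawn from a hidden task, followed by a query input, and the model must infer the task from the examples to emit the query's label. All names and sizes (NUM_INPUTS, NUM_TASKS, NUM_EXAMPLES, make_sequence) are assumptions chosen for illustration.

```python
import random

NUM_INPUTS = 32    # distinct input tokens (illustrative size)
NUM_TASKS = 8      # distinct hidden tasks (illustrative size)
NUM_EXAMPLES = 4   # in-context example pairs per sequence

# Each "task" is a random mapping from input tokens to label tokens.
random.seed(0)
TASKS = [
    {x: random.randrange(NUM_INPUTS) for x in range(NUM_INPUTS)}
    for _ in range(NUM_TASKS)
]

def make_sequence():
    """Build one sequence [x1, y1, ..., xk, yk, x_query] and its target y_query."""
    task = random.choice(TASKS)
    # Sample distinct inputs so the query differs from the in-context examples:
    # the correct label is generally NOT present in the context, so the model
    # must infer the task rather than copy an answer (unlike the plain copy task).
    xs = random.sample(range(NUM_INPUTS), NUM_EXAMPLES + 1)
    seq = []
    for x in xs[:-1]:
        seq += [x, task[x]]   # in-context (input, label) pairs
    seq.append(xs[-1])        # query input; its label is the prediction target
    return seq, task[xs[-1]]

seq, target = make_sequence()
print(seq, "->", target)
```

In the standard copy task that induction heads explain, the query's answer appears verbatim earlier in the sequence; in the sketch above it usually does not, which is what forces task inference from the examples.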
Transformer-based language models, such as ChatGPT, have an impressive ability called In-Context Learning (ICL), where they adaptively answer questions based on examples provided in the context. Prior research explained this behavior mainly when models directly copy answers given explicitly in the context. However, real-world scenarios often require these models to learn how to solve new tasks just by seeing a few examples, without directly copying the answers. In this study, we explored how this more general and practical form of learning emerges during the training of these models. Through careful experiments, we discovered that the learning process happens in multiple stages. At each stage, the model forms specialized internal structures, or "circuits", that help it progressively better understand and solve new tasks. Our findings suggest that the ability of transformers to quickly adapt to unseen tasks stems from this multi-stage development of internal circuits. Understanding these phases provides valuable insight into why modern language models like ChatGPT perform so well on diverse tasks, moving us closer to fully unlocking their potential and enhancing their reliability in everyday applications.