Poster
Transformers as Algorithms: Generalization and Stability in In-context Learning
Yingcong Li · Muhammed Ildiz · Dimitris Papailiopoulos · Samet Oymak

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #218

In-context learning (ICL) is a type of prompting in which a transformer model operates on a sequence of (input, output) examples and performs inference on the fly. In this work, we formalize in-context learning as an algorithm-learning problem in which a transformer model implicitly constructs a hypothesis function at inference time. We first explore the statistical aspects of this abstraction through the lens of multitask learning (MTL): we obtain generalization bounds for ICL when the input prompt is (1) a sequence of i.i.d. (input, label) pairs or (2) a trajectory arising from a dynamical system. The crux of our analysis is relating the excess risk to the stability of the algorithm implemented by the transformer. We characterize when the transformer/attention architecture provably obeys the stability condition and also provide empirical verification. For generalization on unseen tasks, we identify an inductive-bias phenomenon in which the transfer learning risk is governed by the task complexity and the number of MTL tasks in a highly predictable manner. Finally, we provide numerical evaluations that (1) demonstrate that transformers can indeed implement near-optimal algorithms on classical regression problems with i.i.d. and dynamic data, (2) provide insights on stability, and (3) verify our theoretical predictions.
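
Below is a minimal, illustrative sketch (not the authors' code) of the i.i.d. regression instance of this setup: a prompt is a sequence of (input, label) pairs drawn from a random linear task, and the classical near-optimal baseline on such data is ordinary least squares fit to the in-context examples and applied to the query. The dimensions, prompt length, and noise level are assumed purely for illustration.

# Minimal sketch (assumed setup, not the authors' code) of ICL as algorithm learning:
# a prompt is a sequence of i.i.d. (input, label) pairs from a random linear task,
# and the near-optimal reference algorithm on such data is least squares on the prompt.
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 5, 20, 0.1                 # feature dim, prompt length, label noise (assumed)

# Sample one task: a hidden weight vector w, then n in-context (x, y) examples.
w = rng.normal(size=d)
X = rng.normal(size=(n, d))              # prompt inputs
y = X @ w + sigma * rng.normal(size=n)   # prompt labels
x_query = rng.normal(size=d)             # query input whose label is to be predicted

# Baseline "algorithm" a trained ICL transformer would be compared against:
# ordinary least squares fit to the in-context examples, applied to the query.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_query_pred = x_query @ w_hat

print("prediction:", y_query_pred, "  target:", x_query @ w)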

Author Information

Yingcong Li (University of California, Riverside)
Muhammed Ildiz (University of California, Riverside)
Dimitris Papailiopoulos (University of Wisconsin-Madison)
Samet Oymak (University of Michigan - Ann Arbor)
