Self-attention, an architectural motif designed to model long-range interactions in sequential data, has driven numerous recent breakthroughs in natural language processing and beyond. This work provides a theoretical analysis of the inductive biases of self-attention modules. Our focus is to rigorously establish which functions and long-range dependencies self-attention blocks prefer to represent. Our main result shows that bounded-norm Transformer networks "create sparse variables": a single self-attention head can represent a sparse function of the input sequence, with sample complexity scaling only logarithmically with the context length. To support our analysis, we present synthetic experiments to probe the sample complexity of learning sparse Boolean functions with Transformers.
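The "variable creation" claim above can be illustrated with a minimal hand-set sketch (not the paper's construction): a single softmax attention head whose scores are large on a few fixed positions concentrates nearly all of its weight there, so its output is effectively a function of only those coordinates, regardless of the context length. The score offset `beta`, the index set `relevant`, and the function names below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def sparse_attention_readout(x, relevant, beta=20.0):
    """One attention head with hand-set (not trained) scores:
    +beta on the s relevant positions, 0 elsewhere. Its output is a
    weighted average of x that is dominated by the relevant entries."""
    scores = np.zeros(len(x))
    scores[list(relevant)] = beta
    w = softmax(scores)      # nearly uniform over `relevant`, ~0 elsewhere
    return w @ x             # ~ mean of the s relevant coordinates

# Context length T = 64, but the target depends on only s = 3 positions.
T, relevant = 64, [3, 17, 42]
x = np.zeros(T)
x[3], x[42] = 1.0, 1.0       # two of the three relevant bits are on
v = sparse_attention_readout(x, relevant)
# v is close to 2/3, the mean of the relevant bits; the 61 irrelevant
# positions contribute only O(T * e^{-beta}) weight in total.
```

Growing `T` only adds exponentially down-weighted terms, which is one intuition for why the sample complexity in the abstract scales logarithmically, rather than linearly, with the context length.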
Author Information
Benjamin Edelman (Harvard University)
Surbhi Goel (Microsoft Research)
Sham Kakade (Harvard University)
Cyril Zhang (Microsoft Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Inductive Biases and Variable Creation in Self-Attention Mechanisms »
  Tue, Jul 19 – Wed, Jul 20, Hall E #1224
More from the Same Authors
- 2021: Sparsity in the Partially Controllable LQR »
  Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang
- 2022: The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift »
  Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2023: Exposing Attention Glitches with Flip-Flop Language Modeling »
  Bingbin Liu · Jordan Ash · Surbhi Goel · Akshay Krishnamurthy · Cyril Zhang
- 2023: Predicting Task Forgetting in Large Language Models »
  Anat Kleiman · Jonathan Frankle · Sham Kakade · Mansheej Paul
- 2023 Poster: Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron »
  Jingfeng Wu · Difan Zou · Zixiang Chen · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2023 Poster: On Provable Copyright Protection for Generative Models »
  Nikhil Vyas · Sham Kakade · Boaz Barak
- 2023 Poster: Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games »
  Dylan Foster · Noah Golowich · Sham Kakade
- 2022 Social: Mental Health in ML Academia »
  Paula Gradu · Cyril Zhang
- 2022 Poster: Sparsity in Partially Controllable Linear Systems »
  Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang
- 2022 Poster: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression »
  Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2022 Poster: Understanding Contrastive Learning Requires Incorporating Inductive Biases »
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Spotlight: Sparsity in Partially Controllable Linear Systems »
  Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang
- 2022 Oral: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression »
  Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2022 Spotlight: Understanding Contrastive Learning Requires Incorporating Inductive Biases »
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2021 Poster: Acceleration via Fractal Learning Rate Schedules »
  Naman Agarwal · Surbhi Goel · Cyril Zhang
- 2021 Spotlight: Acceleration via Fractal Learning Rate Schedules »
  Naman Agarwal · Surbhi Goel · Cyril Zhang
- 2020 Poster: Causal Strategic Linear Regression »
  Yonadav Shavit · Benjamin Edelman · Brian Axelrod