While the successes of transformers across many domains are indisputable, an accurate understanding of their learning mechanics is still largely lacking. Their capabilities have been probed on benchmarks that include a variety of structured and reasoning tasks, but mathematical understanding lags substantially behind. Recent lines of work have begun studying representational aspects of this question: that is, the size, depth, or complexity that attention-based networks require to perform certain tasks. However, there is no guarantee that the learning dynamics will converge to the constructions proposed. In our paper, we provide fine-grained mechanistic understanding of how transformers learn "semantic structure", understood as capturing the co-occurrence structure of words. Precisely, we show, through a combination of mathematical analysis and experiments on Wikipedia data and on synthetic data generated by Latent Dirichlet Allocation (LDA), that both the embedding layer and the self-attention layer encode the topical structure. In the former, this manifests as a higher average inner product between embeddings of same-topic words; in the latter, as higher average pairwise attention between same-topic words. The mathematical results involve several assumptions that make the analysis tractable, which we verify on data and which may be of independent interest.
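The within-topic similarity measure described above can be computed directly. The sketch below is a minimal illustration (not the paper's code): it assumes a toy embedding matrix and a topic label per word, and compares the average inner product over same-topic word pairs against different-topic pairs. Embeddings that encode topic structure should score higher on the former.

```python
import numpy as np

def avg_inner_products(emb, topics):
    """Average pairwise inner product for same-topic vs. different-topic word pairs.

    emb:    (V, d) array of word embeddings.
    topics: length-V array of integer topic ids, one per word.
    """
    gram = emb @ emb.T                            # all pairwise inner products
    same = topics[:, None] == topics[None, :]     # same-topic indicator matrix
    off_diag = ~np.eye(len(topics), dtype=bool)   # exclude each word paired with itself
    within = gram[same & off_diag].mean()
    across = gram[~same].mean()
    return within, across

# Toy data: two topics, each cluster of embeddings centered on its own direction.
rng = np.random.default_rng(0)
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
topics = np.array([0] * 10 + [1] * 10)
emb = centers[topics] + 0.1 * rng.standard_normal((20, 2))

within, across = avg_inner_products(emb, topics)
print(within > across)  # clustered embeddings give higher same-topic inner products
```

The same averaging pattern applies to the attention claim: replace the Gram matrix with a layer's attention weights and compare the same-topic mask average against the different-topic one.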
Author Information
Yuchen Li (Carnegie Mellon University)
Yuanzhi Li (Carnegie Mellon University)
Andrej Risteski (Carnegie Mellon University)
More from the Same Authors
- 2021 : The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders »
  Divyansh Pareek · Andrej Risteski
- 2021 : When Is Generalizable Reinforcement Learning Tractable? »
  Dhruv Malik · Yuanzhi Li · Pradeep Ravikumar
- 2021 : Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity »
  Dhruv Malik · Aldo Pacchiano · Vishwak Srinivasan · Yuanzhi Li
- 2021 : Towards understanding how momentum improves generalization in deep learning »
  Samy Jelassi · Yuanzhi Li
- 2023 : How Does Adaptive Optimization Impact Local Neural Network Geometry? »
  Kaiqi Jiang · Dhruv Malik · Yuanzhi Li
- 2023 : Characterizing and Improving Transformer Solutions for Dyck Grammars »
  Kaiyue Wen · Yuchen Li · Bingbin Liu · Andrej Risteski
- 2023 : Deep Equilibrium Based Neural Operators for Steady-State PDEs »
  Tanya Marwah · Ashwini Pokle · Zico Kolter · Zachary Lipton · Jianfeng Lu · Andrej Risteski
- 2023 : Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Markov Chains »
  Yilong Qin · Andrej Risteski
- 2023 : Plan, Eliminate, and Track --- Language Models are Good Teachers for Embodied Agents »
  Yue Wu · So Yeon Min · Yonatan Bisk · Ruslan Salakhutdinov · Amos Azaria · Yuanzhi Li · Tom Mitchell · Shrimai Prabhumoye
- 2023 : SPRING: Studying Papers and Reasoning to play Games »
  Yue Wu · Shrimai Prabhumoye · So Yeon Min · Yonatan Bisk · Ruslan Salakhutdinov · Amos Azaria · Tom Mitchell · Yuanzhi Li
- 2023 : (Un)interpretability of Transformers: a case study with Dyck grammars »
  Kaiyue Wen · Yuchen Li · Bingbin Liu · Andrej Risteski
- 2023 : How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding »
  Yuchen Li · Yuanzhi Li · Andrej Risteski
- 2023 : Provable benefits of score matching »
  Chirag Pabbaraju · Dhruv Rohatgi · Anish Sevekari · Holden Lee · Ankur Moitra · Andrej Risteski
- 2023 Poster: Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality »
  Dhruv Malik · Conor Igoe · Yuanzhi Li · Aarti Singh
- 2023 Poster: Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective »
  Tanya Marwah · Zachary Lipton · Jianfeng Lu · Andrej Risteski
- 2023 Poster: The Benefits of Mixup for Feature Learning »
  Difan Zou · Yuan Cao · Yuanzhi Li · Quanquan Gu
- 2022 Workshop: Principles of Distribution Shift (PODS) »
  Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski
- 2022 Poster: Towards understanding how momentum improves generalization in deep learning »
  Samy Jelassi · Yuanzhi Li
- 2022 Spotlight: Towards understanding how momentum improves generalization in deep learning »
  Samy Jelassi · Yuanzhi Li
- 2021 Poster: Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity »
  Dhruv Malik · Aldo Pacchiano · Vishwak Srinivasan · Yuanzhi Li
- 2021 Spotlight: Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity »
  Dhruv Malik · Aldo Pacchiano · Vishwak Srinivasan · Yuanzhi Li
- 2021 Poster: Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning »
  Zixin Wen · Yuanzhi Li
- 2021 Poster: Representational aspects of depth and conditioning in normalizing flows »
  Frederic Koehler · Viraj Mehta · Andrej Risteski
- 2021 Spotlight: Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning »
  Zixin Wen · Yuanzhi Li
- 2021 Spotlight: Representational aspects of depth and conditioning in normalizing flows »
  Frederic Koehler · Viraj Mehta · Andrej Risteski
- 2020 Poster: Empirical Study of the Benefits of Overparameterization in Learning Latent Variable Models »
  Rares-Darius Buhai · Yoni Halpern · Yoon Kim · Andrej Risteski · David Sontag
- 2020 Poster: On Learning Language-Invariant Representations for Universal Machine Translation »
  Han Zhao · Junjie Hu · Andrej Risteski