Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing. Such works have mostly focused on group equivariant convolutions, building on the result that group equivariant linear maps are necessarily convolutions. In this work, we extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models. We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. We demonstrate the generality of our approach by showing experimental results that are competitive with baseline methods on a wide range of tasks: shape counting on point clouds, molecular property regression, and modelling particle trajectories under Hamiltonian dynamics.
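To make the core idea concrete, below is a minimal sketch (our illustration, not the authors' released code; the class name, MLP parameterisation, and dimensions are assumptions) of equivariant self-attention for the simplest case, the translation group T(3). The attention logits are computed only from relative positions x_i - x_j, which are unchanged by a global translation, so shifting every input point leaves the per-point output features unchanged: the layer is translation-equivariant. The full LieSelfAttention generalises this construction to arbitrary Lie groups by lifting inputs to group elements and attending over relative group elements.

```python
import torch
import torch.nn as nn


class TranslationEquivariantSelfAttention(nn.Module):
    """Sketch of group-equivariant self-attention for the translation
    group T(3): attention logits depend only on relative positions
    x_i - x_j, never on absolute coordinates, so a global translation
    of the inputs leaves the output features unchanged."""

    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.value = nn.Linear(feat_dim, feat_dim)
        # Hypothetical parameterisation: logits come from a small MLP
        # applied to the relative group element (here a 3-vector).
        self.logit_mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, x: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
        # x: (N, 3) point coordinates, f: (N, feat_dim) point features.
        rel = x[:, None, :] - x[None, :, :]       # (N, N, 3) relative positions
        logits = self.logit_mlp(rel).squeeze(-1)  # (N, N) pairwise logits
        attn = logits.softmax(dim=-1)             # each point attends over all points
        return attn @ self.value(f)               # (N, feat_dim) output features


# Quick check: the output is unchanged under a global translation,
# because rel = x_i - x_j is translation-invariant.
net = TranslationEquivariantSelfAttention(feat_dim=8)
x, f = torch.randn(5, 3), torch.randn(5, 8)
shift = torch.randn(3)
assert torch.allclose(net(x, f), net(x + shift, f), atol=1e-6)
```

The design choice mirrors the convolutional case: just as a convolution achieves equivariance by making weights depend only on relative offsets, the attention weights here depend only on relative group elements between input pairs.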
Author Information
Michael Hutchinson (University of Oxford)
Charline Le Lan (University of Oxford)
Sheheryar Zaidi (University of Oxford)
Emilien Dupont (University of Oxford)
Yee-Whye Teh (University of Oxford and DeepMind)
Hyunjik Kim (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: LieTransformer: Equivariant Self-Attention for Lie Groups »
  Tue. Jul 20th, 02:25 -- 02:30 PM
More from the Same Authors
- 2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
  Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal
- 2022 : Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations »
  Cong Lu · Philip Ball · Tim G. J. Rudner · Jack Parker-Holder · Michael A. Osborne · Yee-Whye Teh
- 2023 : Synthetic Experience Replay »
  Cong Lu · Philip Ball · Yee-Whye Teh · Jack Parker-Holder
- 2023 Poster: Modality-Agnostic Variational Compression of Implicit Neural Representations »
  Jonathan Richard Schwarz · Jihoon Tack · Yee-Whye Teh · Jaeho Lee · Jinwoo Shin
- 2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances »
  Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim
- 2023 Poster: Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions »
  Leo Klarner · Tim G. J. Rudner · Michael Reutlinger · Torsten Schindler · Garrett Morris · Charlotte Deane · Yee-Whye Teh
- 2022 Poster: Continual Learning via Sequential Function-Space Variational Inference »
  Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal
- 2022 Spotlight: Continual Learning via Sequential Function-Space Variational Inference »
  Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal
- 2022 Poster: From data to functa: Your data point is a function and you can treat it like one »
  Emilien Dupont · Hyunjik Kim · S. M. Ali Eslami · Danilo J. Rezende · Dan Rosenbaum
- 2022 Spotlight: From data to functa: Your data point is a function and you can treat it like one »
  Emilien Dupont · Hyunjik Kim · S. M. Ali Eslami · Danilo J. Rezende · Dan Rosenbaum
- 2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
  Yarin Gal · Yee-Whye Teh · Qixuan Feng · Freddie Bickford Smith · Tim G. J. Rudner
- 2021 Poster: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
  Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh
- 2021 Spotlight: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
  Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh
- 2021 Test Of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics »
  Yee-Whye Teh · Max Welling
- 2021 Poster: The Lipschitz Constant of Self-Attention »
  Hyunjik Kim · George Papamakarios · Andriy Mnih
- 2021 Spotlight: The Lipschitz Constant of Self-Attention »
  Hyunjik Kim · George Papamakarios · Andriy Mnih
- 2021 Poster: Provably Strict Generalisation Benefit for Equivariant Models »
  Bryn Elesedy · Sheheryar Zaidi
- 2021 Spotlight: Provably Strict Generalisation Benefit for Equivariant Models »
  Bryn Elesedy · Sheheryar Zaidi
- 2020 : Invited talk 3: Representational limitations of invertible models »
  Emilien Dupont
- 2020 : Contributed Talk 5: Neural Ensemble Search for Performant and Calibrated Predictions »
  Sheheryar Zaidi
- 2020 Poster: MetaFun: Meta-Learning with Iterative Functional Updates »
  Jin Xu · Jean-Francois Ton · Hyunjik Kim · Adam Kosiorek · Yee-Whye Teh
- 2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support »
  Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth
- 2020 Poster: Equivariant Neural Rendering »
  Emilien Dupont · Miguel Angel Bautista Martin · Alex Colburn · Aditya Sankar · Joshua M. Susskind · Qi Shan
- 2020 Poster: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise »
  Umut Simsekli · Lingjiong Zhu · Yee-Whye Teh · Mert Gurbuzbalaban
- 2020 Poster: Uncertainty Estimation Using a Single Deep Deterministic Neural Network »
  Joost van Amersfoort · Lewis Smith · Yee-Whye Teh · Yarin Gal
- 2019 Oral: Hybrid Models with Deep and Invertible Features »
  Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan
- 2019 Poster: Disentangling Disentanglement in Variational Autoencoders »
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Poster: Hybrid Models with Deep and Invertible Features »
  Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan
- 2019 Oral: Disentangling Disentanglement in Variational Autoencoders »
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Poster: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
  Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh
- 2019 Oral: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
  Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh
- 2018 Poster: Progress & Compress: A scalable framework for continual learning »
  Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee-Whye Teh · Razvan Pascanu · Raia Hadsell
- 2018 Poster: Mix & Match - Agent Curricula for Reinforcement Learning »
  Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee-Whye Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu
- 2018 Oral: Progress & Compress: A scalable framework for continual learning »
  Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee-Whye Teh · Razvan Pascanu · Raia Hadsell
- 2018 Oral: Mix & Match - Agent Curricula for Reinforcement Learning »
  Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee-Whye Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu
- 2018 Poster: Disentangling by Factorising »
  Hyunjik Kim · Andriy Mnih
- 2018 Poster: Conditional Neural Processes »
  Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee-Whye Teh · Danilo J. Rezende · S. M. Ali Eslami
- 2018 Poster: Tighter Variational Bounds are Not Necessarily Better »
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh
- 2018 Oral: Tighter Variational Bounds are Not Necessarily Better »
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh
- 2018 Oral: Disentangling by Factorising »
  Hyunjik Kim · Andriy Mnih
- 2018 Oral: Conditional Neural Processes »
  Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee-Whye Teh · Danilo J. Rezende · S. M. Ali Eslami