We introduce SignNet and BasisNet—new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if v is an eigenvector then so is −v; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that our networks are universal, i.e., they can approximate any continuous function of eigenvectors with proper invariances. When used with Laplacian eigenvectors, our architectures are also theoretically expressive for graph representation learning, in that they can approximate any spectral graph convolution, can compute spectral invariants that go beyond message passing neural networks, and can provably simulate previously proposed graph positional encodings. Experiments show the strength of our networks for processing geometric data, in tasks including: molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNetBasisNet.
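The sign-flip invariance described above can be achieved by summing a learned encoder's outputs on v and −v, then aggregating across eigenvectors. Below is a minimal numerical sketch of this construction; the random-weight `phi` and `rho` are toy stand-ins for the learned MLPs, and all names are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-weight stand-ins for the learned networks phi and rho.
W_phi = rng.standard_normal((8, 4))       # per-eigenvector encoder weights
W_rho = rng.standard_normal((4, 8 * 3))   # aggregator weights for k = 3 eigenvectors

def phi(v):
    # Encoder applied to a single eigenvector (length-4 toy graphs here).
    return np.tanh(W_phi @ v)

def rho(z):
    # Aggregator over the concatenated per-eigenvector features.
    return np.tanh(W_rho @ z)

def signnet(eigvecs):
    # eigvecs: (n, k) matrix whose columns are eigenvectors.
    # phi(v) + phi(-v) is unchanged when v -> -v, so the output is
    # invariant to independently flipping the sign of any column.
    feats = [phi(eigvecs[:, i]) + phi(-eigvecs[:, i])
             for i in range(eigvecs.shape[1])]
    return rho(np.concatenate(feats))
```

Flipping the sign of any subset of columns leaves the output unchanged, e.g. `signnet(V)` equals `signnet(V @ np.diag([1, -1, -1]))` for any `(4, 3)` matrix `V`.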
Author Information
Derek Lim (MIT)
Joshua Robinson (MIT)
I am Josh Robinson, a PhD student at MIT CSAIL & LIDS advised by Stefanie Jegelka and Suvrit Sra, and part of the MIT machine learning group. I want to understand how machines can learn useful representations of the world, and I am also interested in modeling diversity and its many applications in learning problems. Previously I was an undergraduate at the University of Warwick, where I worked with Robert MacKay on probability theory.
More from the Same Authors

2023 : Learning Structured Representations with Equivariant Contrastive Learning »
Sharut Gupta · Joshua Robinson · Derek Lim · Soledad Villar · Stefanie Jegelka 
2023 : Expressive Sign Equivariant Networks for Spectral Geometric Learning »
Derek Lim · Joshua Robinson · Stefanie Jegelka · Haggai Maron 
2023 : Positional Encodings as Group Representations: A Unified Framework »
Derek Lim · Hannah Lawrence · Ningyuan Huang · Erik Thiede 
2023 Oral: Equivariant Polynomials for Graph Neural Networks »
Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman 
2023 Poster: Equivariant Polynomials for Graph Neural Networks »
Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman 
2023 Poster: Graph Inductive Biases in Transformers without Message Passing »
Liheng Ma · Chen Lin · Derek Lim · Adriana Romero Soriano · Puneet Dokania · Mark Coates · Phil Torr · Ser Nam Lim 
2022 : Sign and Basis Invariant Networks for Spectral Graph Representation Learning »
Derek Lim · Joshua Robinson · Lingxiao Zhao · Tess Smidt · Suvrit Sra · Haggai Maron · Stefanie Jegelka 
2022 : The Power of Recursion in Graph Neural Networks for Counting Substructures »
Behrooz Tahmasebi · Derek Lim · Stefanie Jegelka 
2022 Poster: Understanding Doubly Stochastic Clustering »
Tianjiao Ding · Derek Lim · Rene Vidal · Benjamin Haeffele 
2022 Spotlight: Understanding Doubly Stochastic Clustering »
Tianjiao Ding · Derek Lim · Rene Vidal · Benjamin Haeffele 
2020 Poster: Strength from Weakness: Fast Learning Using Weak Supervision »
Joshua Robinson · Stefanie Jegelka · Suvrit Sra