Poster
Graph Inductive Biases in Transformers without Message Passing
Liheng Ma · Chen Lin · Derek Lim · Adriana Romero Soriano · Puneet Dokania · Mark Coates · Phil Torr · Ser Nam Lim

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #229

Transformers for graph data are increasingly studied and have been successful in numerous learning tasks. Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings. However, Graph Transformers that use message-passing inherit the known issues of message-passing and differ significantly from Transformers used in other domains, making it harder to transfer research advances. On the other hand, Graph Transformers without message-passing often perform poorly on smaller datasets, where inductive biases are more crucial. To bridge this gap, we propose the Graph Inductive bias Transformer (GRIT) --- a new Graph Transformer that incorporates graph inductive biases without using message passing. GRIT is based on several architectural changes that are each theoretically and empirically justified, including: learned relative positional encodings initialized with random walk probabilities, a flexible attention mechanism that updates both node and node-pair representations, and the injection of degree information in each layer. We prove that GRIT is expressive --- it can express shortest path distances and various graph propagation matrices. GRIT achieves state-of-the-art empirical performance across a variety of graph datasets, demonstrating the power that Graph Transformers without message-passing can deliver.
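The abstract above describes GRIT's key components at a high level. As a concrete illustration of one of them, the following minimal Python sketch shows how relative positional encodings could be initialized from k-step random walk probabilities; the function name rrwp_init, the NumPy formulation, and the choice of num_steps are illustrative assumptions, not the authors' implementation.

import numpy as np

def rrwp_init(adj: np.ndarray, num_steps: int) -> np.ndarray:
    """Stack the 0..(num_steps-1)-step random-walk probability matrices.

    adj: (n, n) adjacency matrix of an undirected graph.
    Returns an (n, n, num_steps) tensor whose entry [i, j, k] is the
    probability that a k-step random walk from node i ends at node j.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    # One-step transition matrix D^{-1} A; isolated nodes get a zero row.
    transition = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float),
                           where=deg > 0)
    walks = [np.eye(n)]  # P^0 = I: a zero-step walk stays at its start node.
    for _ in range(num_steps - 1):
        walks.append(walks[-1] @ transition)  # P^{k+1} = P^k (D^{-1} A)
    return np.stack(walks, axis=-1)

# Example: a 4-cycle graph with 3 random-walk steps. In a Transformer, the
# resulting (4, 4, 3) tensor would be passed through a learned projection to
# obtain the relative positional encoding for each node pair.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
pe = rrwp_init(A, num_steps=3)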

Author Information

Liheng Ma (McGill University)
Chen Lin (University of Oxford)
Derek Lim (MIT)
Adriana Romero Soriano (Facebook AI Research)
Puneet Dokania (University of Oxford)
Mark Coates (McGill University)
Phil Torr (University of Oxford)
Ser Nam Lim (Meta AI/UCF)