
Poster Teaser
Workshop: Graph Representation Learning and Beyond (GRL+)

(#92 / Sess. 1) From Graph Low-Rank Global Attention to 2-FWL Approximation

Omri Puny


Graph Neural Networks (GNNs) are known to have expressive power bounded by that of the vertex coloring algorithm [xu2018how, morris2018weisfeiler]. However, with rich node features such a bound does not exist, and GNNs can be shown to be universal. Unfortunately, expressive power alone does not imply good generalization. We suggest the Low-Rank Global Attention (LRGA) module, which takes advantage of efficient low-rank matrix-vector multiplication to improve the algorithmic alignment [Xu2019algoalign] of GNNs with the more powerful 2-folklore Weisfeiler-Lehman (2-FWL) algorithm. Furthermore, we provide a sample complexity bound for the module using a kernel feature map interpretation of 2-FWL. Empirically, augmenting existing GNN layers with LRGA produces state-of-the-art results on most datasets in a standard GNN benchmark.
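The efficiency the abstract refers to comes from a standard linear-algebra fact: a rank-k attention matrix A = U V^T need never be formed explicitly, because associativity lets (U V^T) X be computed as U (V^T X), costing O(nkd) instead of O(n^2 d) for n nodes. The sketch below illustrates only this generic low-rank trick, not the actual LRGA module; the dimensions and random factors are hypothetical.

```python
import numpy as np

n, k, d = 500, 8, 16  # nodes, attention rank, feature dimension (illustrative values)
rng = np.random.default_rng(0)
U = rng.standard_normal((n, k))  # hypothetical low-rank attention factors
V = rng.standard_normal((n, k))
X = rng.standard_normal((n, d))  # node feature matrix

# Naive: materialize the full n x n attention matrix, O(n^2 d)
dense_out = (U @ V.T) @ X

# Low-rank: reassociate the product, O(n k d), never forming an n x n matrix
low_rank_out = U @ (V.T @ X)

assert low_rank_out.shape == (n, d)
assert np.allclose(dense_out, low_rank_out)
```

Both paths produce the same n x d output; the reassociated version scales linearly in the number of nodes, which is what makes a global (all-pairs) attention term affordable inside a GNN layer.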
