Poster Teaser in Workshop: Graph Representation Learning and Beyond (GRL+)

(#80 / Sess. 2) Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings

Christopher Morris


Abstract:

The 1-dimensional Weisfeiler-Leman (WL) algorithm has recently emerged as a powerful tool for the analysis and design of kernels and neural architectures for (supervised) learning with graphs. However, due to its purely local nature, it might miss essential patterns. The k-dimensional WL accounts for higher-order interactions between vertices by defining a suitable notion of adjacency between k-tuples of vertices, but it does not scale and may suffer from overfitting when used in a machine learning setting. We circumvent these issues by proposing new local variants of the k-WL, together with corresponding neural architectures, which consider only a subset of the original neighborhood. The expressive power of (one of) our algorithms is strictly higher than that of the original k-WL, with no loss in scalability. In fact, for sparse graphs, scalability improves by several orders of magnitude. The kernel version establishes a new state of the art for graph classification on a wide range of benchmark datasets, while the neural version shows promising performance on large-scale molecular regression tasks.
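To make the abstract's central notions concrete, here is a minimal sketch of the classic 1-WL color refinement the paper builds on, followed by a sketch of the neighborhood restriction that drives the claimed scalability gain. The graph encoding (a neighbor dictionary) and all function names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of classic 1-WL color refinement, assuming a graph
# given as a dict mapping each vertex to its neighbors. Illustrative
# only; not the paper's code.
from collections import Counter

def wl_refine(adj, rounds=3):
    """Return a stable-ish coloring of the vertices after `rounds` steps."""
    colors = {v: () for v in adj}  # uniform initial color (unlabeled graph)
    for _ in range(rounds):
        # New color = own color plus the sorted multiset of neighbor colors.
        # Keeping the full signature (instead of compressing it to an
        # integer id) keeps colors directly comparable across graphs.
        colors = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
    return colors

triangle = {0: (1, 2), 1: (0, 2), 2: (0, 1)}
path = {0: (1,), 1: (0, 2), 2: (1,)}
# The color multisets differ, so 1-WL distinguishes these two graphs.
print(Counter(wl_refine(triangle).values()) == Counter(wl_refine(path).values()))  # False
```

The k-WL refines colors of k-tuples instead of single vertices, and the local variants shrink the set of tuples a given tuple exchanges information with. The following hedged contrast shows the idea: the standard j-neighborhood replaces position j of a tuple by any vertex, while a local variant replaces it only by a neighbor of the current vertex, so the cost per tuple drops from O(n) to O(degree) on sparse graphs.

```python
# Hedged sketch of k-tuple neighborhoods; function names are hypothetical.
def global_j_neighbors(tup, j, vertices):
    # Standard k-WL: replace position j by ANY vertex -> O(n) neighbors.
    return [tup[:j] + (w,) + tup[j + 1:] for w in vertices]

def local_j_neighbors(tup, j, adj):
    # Local variant: replace position j only by a NEIGHBOR of tup[j]
    # -> O(degree) neighbors, the source of the savings on sparse graphs.
    return [tup[:j] + (w,) + tup[j + 1:] for w in adj[tup[j]]]
```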

