Size-Invariant Graph Representations for Graph Classification Extrapolations

Beatrice Bevilacqua · Yangze Zhou · Bruno Ribeiro


Keywords: [ Networks and Relational Learning ] [ Algorithms ]

Tue 20 Jul 9 a.m. PDT — 11 a.m. PDT
Oral presentation: Graph Learning
Tue 20 Jul 5 a.m. PDT — 6 a.m. PDT


In general, graph representation learning methods assume that the train and test data come from the same distribution. In this work we consider an underexplored area of an otherwise rapidly developing field: out-of-distribution (OOD) graph classification, where train and test data have different distributions and test data is unavailable during training. We show that a causal model can be used to learn approximately invariant representations that extrapolate better between train and test data. We conclude with experiments on synthetic and real-world datasets showcasing the benefits of representations that are invariant to train/test distribution shifts.
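To make the idea of a size-invariant representation concrete, the sketch below builds a fixed-dimensional graph feature from the empirical distribution of edge counts over randomly sampled k-node induced subgraphs. This is a hypothetical illustration of the general principle (local-structure statistics do not grow with graph size), not the authors' exact estimator; the function and graph-construction names are ours.

```python
import random
from itertools import combinations

def subgraph_density_features(adj, k=3, n_samples=200, seed=0):
    """Empirical distribution of edge counts over random k-node
    induced subgraphs of a graph given as {node: set_of_neighbors}.
    The output dimension (k choose 2) + 1 is fixed regardless of
    the input graph's size -- a hypothetical sketch of a
    size-invariant representation, not the paper's method."""
    rng = random.Random(seed)
    nodes = list(adj)
    max_edges = k * (k - 1) // 2
    counts = [0] * (max_edges + 1)
    for _ in range(n_samples):
        sample = rng.sample(nodes, k)
        # Count edges present among the k sampled nodes.
        edges = sum(1 for u, v in combinations(sample, 2) if v in adj[u])
        counts[edges] += 1
    return [c / n_samples for c in counts]

def cycle(n):
    """Cycle graph on n nodes, as an adjacency dict."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

# Graphs of very different sizes map into the same feature space,
# so a classifier trained on small graphs can be applied to large ones.
f_small = subgraph_density_features(cycle(10), k=3)
f_large = subgraph_density_features(cycle(50), k=3)
```

Both feature vectors live in the same 4-dimensional simplex even though the graphs have 10 and 50 nodes, which is the property that lets a downstream classifier be evaluated on test graphs whose sizes were never seen in training.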
