Poster

Size-Invariant Graph Representations for Graph Classification Extrapolations

Beatrice Bevilacqua · Yangze Zhou · Bruno Ribeiro

Virtual

Keywords: [ Networks and Relational Learning ] [ Algorithms ]


Abstract:

In general, graph representation learning methods assume that the train and test data come from the same distribution. In this work, we consider an underexplored area of the otherwise rapidly developing field of graph representation learning: the task of out-of-distribution (OOD) graph classification, where train and test data have different distributions and test data is unavailable during training. We show that it is possible to use a causal model to learn approximately invariant representations that better extrapolate between train and test data. We conclude with experiments on synthetic and real-world datasets showcasing the benefits of representations that are invariant to train/test distribution shifts.
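As a rough illustration of the idea in the title (not the authors' exact architecture), one way to make a graph representation less sensitive to a train/test size shift is to aggregate features of fixed-size induced subgraphs instead of features of the whole graph. The sketch below assumes `networkx` and `numpy`; the feature map `subgraph_signature`, the subgraph size `k`, and the sample count are illustrative choices, not taken from the paper.

```python
# Minimal sketch: represent a graph by averaging a toy feature over randomly
# sampled k-node induced subgraphs, so the representation depends on local
# structure rather than on the total number of nodes.
import random

import networkx as nx
import numpy as np


def subgraph_signature(subgraph: nx.Graph) -> np.ndarray:
    """Toy feature for a k-node induced subgraph: one-hot of its edge count."""
    k = subgraph.number_of_nodes()
    max_edges = k * (k - 1) // 2
    sig = np.zeros(max_edges + 1)
    sig[subgraph.number_of_edges()] = 1.0
    return sig


def size_invariant_representation(graph: nx.Graph, k: int = 3,
                                  n_samples: int = 200,
                                  seed: int = 0) -> np.ndarray:
    """Average the signature over sampled k-node induced subgraphs.

    Averaging (rather than summing) keeps the representation on a comparable
    scale for small and large graphs, which is the intuition behind
    extrapolating across a size shift between train and test graphs.
    """
    rng = random.Random(seed)
    nodes = list(graph.nodes())
    sigs = []
    for _ in range(n_samples):
        sampled = rng.sample(nodes, k)
        sigs.append(subgraph_signature(graph.subgraph(sampled)))
    return np.mean(sigs, axis=0)


if __name__ == "__main__":
    # Two graphs from the same random-graph family but of different sizes
    # (same expected degree): their representations remain comparable
    # despite the size shift.
    small = nx.erdos_renyi_graph(30, 0.2, seed=1)
    large = nx.erdos_renyi_graph(300, 0.02, seed=2)
    print(size_invariant_representation(small, k=3))
    print(size_invariant_representation(large, k=3))
```

In an actual OOD graph classification pipeline, a learned feature map (e.g. a small GNN over each induced subgraph) would replace the hand-coded `subgraph_signature`; the point here is only the aggregation over fixed-size substructures.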
