

Poster Teaser in Workshop: Graph Representation Learning and Beyond (GRL+)

(#99 / Sess. 2) GraphNets with Spectral Message Passing

Kimberly Stachenfeld


Abstract:

Graph Neural Networks (GNNs) are the subject of intense focus by the machine learning community for problems involving relational reasoning. GNNs can be broadly divided into spatial and spectral approaches. Spatial approaches use a form of learned message passing, in which interactions among vertices are computed locally and information propagates over longer distances on the graph with greater numbers of message-passing steps. Spectral approaches use eigendecompositions of the graph Laplacian to produce a generalization of spatial convolutions to graph-structured data that accesses information over short and long time scales simultaneously. Here we introduce a Spectral Graph Network, which applies message passing in both the spatial and spectral domains. Our model projects vertices of the spatial graph onto the Laplacian eigenvectors, which are each represented as vertices in a fully connected "spectral graph", and then applies learned message passing to them. We apply this model to various benchmark tasks, including a sparse-graph version of MNIST image classification, molecular classification (MoleculeNet), and molecular property prediction (QM9). The Spectral GN promotes efficient training, reaching high performance with fewer training iterations despite having more parameters. The model also provides robustness to edge dropout and outperforms baselines on the classification tasks.
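The projection step described above can be sketched concretely. The snippet below is a minimal illustration, not the authors' implementation: it assumes a combinatorial Laplacian, NumPy's `eigh` for the eigendecomposition, and made-up helper names (`spectral_project`, `spectral_to_spatial`); the learned message-passing networks that operate on the spatial and spectral graphs are omitted.

```python
import numpy as np

def spectral_project(adj, features, k):
    """Project node features onto the k lowest-frequency Laplacian
    eigenvectors, yielding one 'spectral vertex' feature per eigenvector.
    (Hypothetical helper; a sketch of the projection described in the abstract.)"""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                         # combinatorial graph Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigh returns ascending eigenvalues
    u = eigvecs[:, :k]                      # k smoothest eigenvectors
    spectral_feats = u.T @ features         # (k, d): features of the spectral vertices
    return u, spectral_feats

def spectral_to_spatial(u, spectral_feats):
    """Map spectral-vertex features back onto the spatial graph's vertices."""
    return u @ spectral_feats

# Toy example: a 4-node path graph with 3-dimensional node features.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
x = np.random.randn(4, 3)
u, z = spectral_project(adj, x, k=2)   # z: features on the fully connected spectral graph
x_back = spectral_to_spatial(u, z)     # low-frequency reconstruction on the spatial graph
```

In the full model, a learned message-passing network would update `z` on the fully connected spectral graph (so every eigenvector-vertex exchanges information with every other in one step) before mapping back to the spatial domain.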

