Poster in Workshop: Topology, Algebra, and Geometry in Machine Learning

The Power of Recursion in Graph Neural Networks for Counting Substructures

Behrooz Tahmasebi · Derek Lim · Stefanie Jegelka


Abstract:

To compute a graph representation, most Graph Neural Networks (GNNs) follow two steps: first, each graph is decomposed into a collection of subgraphs (which we call the recursion step), and then the collection of subgraphs is encoded. While recently proposed higher-order networks show a remarkable increase in expressive power through the idea of deeper neighborhoods (i.e., deeper recursion), the power of (deeper) recursion in GNNs is still not fully understood. In this paper, we study a fundamental question: what is the power of recursion by itself, without any other pooling operation? To make this concrete, we consider a purely recursion-based GNN, which we call the Recursive Neighborhood Pooling GNN (RNP-GNN). The expressive power of an RNP-GNN and its computational cost quantify the (purified) power of recursion in a graph representation network. We measure this power by the ability to count substructures, a well-known limitation of Message Passing Neural Networks (MPNNs), and show how RNP-GNNs can exploit the sparsity of the underlying graph to achieve powerful representations at low cost. We also compare with recent lower bounds on time complexity and show that recursion-based networks are nearly optimal.
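To illustrate the recursion step the abstract describes, here is a minimal sketch of recursive neighborhood pooling: each vertex's neighborhood subgraph is extracted and recursively encoded with a shorter radii schedule, and the resulting codes are pooled as a multiset. The `radii` schedule, the degree-sequence base encoder, and the sorted-tuple pooling are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx


def rnp_encode(graph, radii):
    """Recursively encode `graph` with a schedule of neighborhood radii.

    Base case: the sorted degree sequence (an illustrative stand-in for
    any permutation-invariant base encoder).
    """
    if not radii:
        return tuple(sorted(d for _, d in graph.degree()))
    r, rest = radii[0], radii[1:]
    vertex_codes = []
    for v in graph:
        # recursion step: extract the radius-r neighborhood subgraph of v ...
        nbhd = nx.ego_graph(graph, v, radius=r)
        # ... and recursively encode the smaller subgraph with the
        # remaining radii in the schedule
        vertex_codes.append(rnp_encode(nbhd, rest))
    # pool the collection of subgraph codes as a sorted multiset
    return tuple(sorted(vertex_codes))


# usage: a two-level recursion; on sparse graphs each extracted
# neighborhood is small, so the recursive calls stay cheap
G = nx.cycle_graph(6)
H = nx.path_graph(6)
print(rnp_encode(G, radii=[1, 1]) == rnp_encode(H, radii=[1, 1]))  # False
```

The hashable, sorted-tuple codes here are only a combinatorial stand-in for learned pooling; in a trained RNP-GNN, the multiset encoding would be a neural aggregator.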
