Understanding Truncated Positional Encodings for Graph Neural Networks
James Flora ⋅ Mitchell Black ⋅ Weng-Keen Wong ⋅ Amir Nayyeri
Abstract
Positional encodings (PEs) enhance the power of graph neural networks (GNNs), both theoretically and empirically. Two of the most popular families of PEs---spectral (e.g., Laplacian eigenspaces, effective resistance) and random walk (polynomials of the adjacency matrix)---are theoretically equivalent in expressive power, and both are known to lie between the 1-WL and 3-WL tests in terms of expressivity. However, this equivalence assumes the GNN uses the ``complete'' version of these PEs, which takes $O(n^3)$ time and space to compute. Practitioners therefore commonly use truncated variants of these encodings (e.g., the first $k$ eigenspaces or the first $k$ powers of the adjacency matrix), but the theoretical properties of these truncated PEs are unknown. In this work, we initiate the study of truncated PEs. Theoretically, we show that, under truncation, several families of PEs are fundamentally different in expressive power. As a corollary, we show that truncated spectral PEs are no longer stronger than the 1-WL test. We also study a family of spectral PEs, the $k$-harmonic distances, to highlight the differences in expressive power of even closely related truncated PEs. Finally, we show experimentally that a mix of truncated PEs is preferable to any single family on real-world datasets.
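To make the notion of truncation concrete, the following is a minimal sketch (not the paper's exact construction) of two common truncated PEs the abstract alludes to: the first $k$ nontrivial Laplacian eigenvectors as a spectral PE, and the diagonals of the first $k$ random-walk matrix powers as a random-walk PE. The function names and the NumPy-based setup are illustrative assumptions.

import numpy as np

def truncated_spectral_pe(A, k):
    """First k nontrivial Laplacian eigenvectors as node features (a common truncated spectral PE)."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                       # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                 # drop the constant eigenvector, keep the next k

def truncated_random_walk_pe(A, k):
    """Return probabilities after 1..k random-walk steps (a common truncated random-walk PE)."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]                       # random-walk transition matrix D^{-1} A
    pe, Pk = [], np.eye(A.shape[0])
    for _ in range(k):
        Pk = Pk @ P
        pe.append(np.diag(Pk))                 # diagonal of the t-th power of P
    return np.stack(pe, axis=1)                # shape (n, k)

# Toy example: 4-cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(truncated_spectral_pe(A, k=2).shape)     # (4, 2)
print(truncated_random_walk_pe(A, k=3).shape)  # (4, 3)

Both variants cost far less than the $O(n^3)$ complete encodings when $k \ll n$, which is precisely the regime whose expressive power the paper analyzes.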