Unitary Convolutions for Message-passing and Positional Encodings on Directed Graphs
Abstract
In many real-world networks, relationships are inherently directional, yet most graph neural networks (GNNs) assume undirected edges, and naïve adaptations of undirected GNNs to directed graphs amplify oversmoothing and gradient pathologies that cap model depth. Unitary graph convolutions (UniConv) provably prevent representational collapse and oversmoothing, but cannot incorporate edge directionality or edge features. In this paper, we introduce a directed unitary GNN with edge features (Dune), which overcomes both of these limitations while retaining UniConv’s guarantees. Dune keeps gradient norms bounded at any depth, allowing it to benefit from deep architectures where existing directed GNNs cannot. The same unitary operator can be embedded in hybrid architectures with graph transformers, where its wavelike propagation supplies positional information and reduces the need for random-walk or Laplacian-based encodings. We prove that Dune avoids the exponential oversmoothing that plagues existing directed GNNs, and we show empirically that it achieves state-of-the-art performance on 12 directed-graph benchmarks, improving on strong baselines by up to 18 percentage points while remaining trainable beyond 100 layers. Our results establish unitary convolutions as a scalable, geometry-aware foundation for deep learning on directed graphs.