InvGNN: Learning Invertible Node Representations on Graphs
Abstract
Over the past decade, Graph Neural Networks (GNNs) have become a standard tool for machine learning on graphs. While many aspects of GNNs have been studied in depth, including their efficiency and expressive power, the invertibility of these models has remained largely unexplored. Standard aggregation functions, such as the mean, max, and sum operators, are not invertible, which rules these models out for tasks that require invertible transformations. In this work, we introduce an invertible GNN layer; by stacking multiple such layers, we construct fully invertible GNN models, which we refer to as InvGNNs. These models inherit the benefits of invertible neural networks, including low memory usage for deep architectures, exact likelihood computation, and generative modeling capabilities. We show that InvGNNs can match the expressive power of the 1-dimensional Weisfeiler-Leman algorithm, so invertibility does not compromise model expressiveness. On standard graph classification benchmarks, our model performs comparably to other well-established GNNs, such as GIN. Beyond classification, we demonstrate the potential of invertible layers on density estimation tasks, including outlier detection and node feature generation, confirming that InvGNNs effectively handle tasks that benefit from invertibility.
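The abstract does not spell out how the invertible layer is constructed. A common recipe for exact invertibility in a message-passing layer is additive coupling, as in reversible residual networks; the sketch below is a minimal illustration under that assumption, not the authors' construction. The class name CouplingGNNLayer, the inner MLPs F and G, and the mean-aggregation step are all hypothetical.

```python
import torch
import torch.nn as nn


class CouplingGNNLayer(nn.Module):
    """Hypothetical additive-coupling invertible graph layer (sketch).

    Node features are split channel-wise into halves (x1, x2); each half is
    updated by a message-passing function of the other, so the layer is
    exactly invertible regardless of the inner networks F and G.
    """

    def __init__(self, dim):
        super().__init__()
        assert dim % 2 == 0, "feature dimension must be even for the split"
        half = dim // 2
        # The inner transforms need not be invertible themselves.
        self.F = nn.Sequential(nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))
        self.G = nn.Sequential(nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))

    def propagate(self, h, adj):
        # Mean aggregation over neighbours (dense adjacency for simplicity).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return (adj @ h) / deg

    def forward(self, x, adj):
        x1, x2 = x.chunk(2, dim=-1)
        y1 = x1 + self.F(self.propagate(x2, adj))
        y2 = x2 + self.G(self.propagate(y1, adj))
        return torch.cat([y1, y2], dim=-1)

    def inverse(self, y, adj):
        # Undo the two additive updates in reverse order.
        y1, y2 = y.chunk(2, dim=-1)
        x2 = y2 - self.G(self.propagate(y1, adj))
        x1 = y1 - self.F(self.propagate(x2, adj))
        return torch.cat([x1, x2], dim=-1)


if __name__ == "__main__":
    N, dim = 6, 8
    adj = (torch.rand(N, N) < 0.4).float()
    adj = ((adj + adj.T) > 0).float()  # symmetrize the random graph
    x = torch.randn(N, dim)
    layer = CouplingGNNLayer(dim)
    x_rec = layer.inverse(layer(x, adj), adj)
    print(torch.allclose(x, x_rec, atol=1e-5))  # True: exact reconstruction
```

Because each half of the features is updated additively from a function of the other half, the inverse simply subtracts the same terms in reverse order. Activations therefore need not be stored for backpropagation (they can be recomputed from the output), which is one source of the memory savings the abstract attributes to invertible architectures.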