From Distribution to Geometry: Stable Graph Generalization via Invariant Barycenters
Abstract
Graph neural networks (GNNs) excel at graph analysis tasks but often generalize poorly in out-of-distribution (OOD) environments. Although this problem has attracted increasing attention, most solutions rely primarily on empirical designs and lack effective mechanisms to characterize and quantify invariance in graph representation learning. To address these limitations, we propose DIGL, a novel graph learning method that improves the OOD generalization of GNNs. Our work makes an initial attempt to geometrize invariance for graphs by introducing computational optimal transport (OT) theory to characterize the invariance principle. Specifically, we formulate the underlying invariant prototype shared by graphs across different environments as a distribution barycenter, and treat the graph representations in each specific environment as distortions of this prototype. Building on this idea, we establish an invariant learning framework that encourages the model to learn purely invariant graph representations for downstream tasks. Moreover, we derive a unified optimization objective for model implementation and provide theoretical analysis to justify our method. Extensive experiments on a broad range of benchmark datasets demonstrate the superior generalization ability of our method compared with baseline methods under various OOD settings.
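To make the barycenter-as-prototype idea concrete, the following is a minimal sketch of computing an entropic-regularized Wasserstein barycenter of several distributions via iterative Bregman projections. This is a standard OT routine, not the paper's DIGL objective; the histograms, grid, and regularization value are illustrative assumptions, with each input histogram standing in for the representation distribution of one environment.

```python
import numpy as np

def sinkhorn_barycenter(hists, M, reg=0.002, weights=None, n_iter=200):
    """Entropic-regularized Wasserstein barycenter of histograms on a
    shared support, via iterative Bregman projections.

    hists   : list of length-n probability vectors (one per environment)
    M       : (n, n) ground-cost matrix on the support
    reg     : entropic regularization strength (illustrative value)
    """
    K = np.exp(-M / reg)                         # Gibbs kernel
    n, k = M.shape[0], len(hists)
    if weights is None:
        weights = np.full(k, 1.0 / k)            # uniform environment weights
    v = np.ones((k, n))
    b = np.ones(n) / n
    for _ in range(n_iter):
        u = [hists[i] / (K @ v[i]) for i in range(k)]
        # weighted geometric mean across environments gives the barycenter
        b = np.exp(sum(w * np.log(K.T @ u[i]) for i, w in enumerate(weights)))
        v = np.array([b / (K.T @ u[i]) for i in range(k)])
    return b / b.sum()

# Toy example: two "environments" as shifted Gaussians on a 1-D grid.
x = np.linspace(0.0, 1.0, 100)
M = (x[:, None] - x[None, :]) ** 2               # squared-distance cost
gauss = lambda mu: np.exp(-((x - mu) ** 2) / 0.005)
h1, h2 = gauss(0.3), gauss(0.7)
h1, h2 = h1 / h1.sum(), h2 / h2.sum()
bary = sinkhorn_barycenter([h1, h2], M)
# the barycenter's mass concentrates between the two inputs (near x = 0.5)
```

In the paper's framing, each environment-specific representation distribution plays the role of `h1`/`h2`, and the barycenter `bary` is the candidate invariant prototype from which each environment is a distortion.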