Size Transferability of Graph Convolutional Networks across Sparsity: A Generalized Graphon Perspective
Abstract
Size transfer scales Graph Convolutional Networks (GCNs) by applying models trained on sampled subgraphs to larger target graphs. However, existing theoretical guarantees are typically confined to dense graphs or restricted sparsity regimes and thus fail to cover the arbitrary sparsity of real-world networks. To bridge this gap, we introduce the Generalized Graphon Convolutional Network (GWCN), built on generalized graphon theory. Unlike the classical graphon limit, which vanishes in sparse settings, GWCN employs a stretching operation to construct a non-trivial limit that preserves topological structure. We derive an explicit transfer error bound that decomposes into a size-dependent component and a density-dependent component, yielding a unified guarantee across arbitrary sparsity levels. Empirical results on real-world networks corroborate our analysis, showing that the transfer error vanishes as graph size increases and edge density decreases.