Test-Time Graph Neural Dataset Search With Generative Projection
Abstract
In this work, we address the test-time adaptation challenge in graph neural networks (GNNs), focusing on overcoming the limited flexibility and generalization of existing data-centric approaches. To this end, we propose a novel research problem, test-time graph neural dataset search, which seeks to learn a parameterized test-time graph distribution that improves the inference performance of well-trained GNNs on unseen test graphs. Specifically, we propose a generative Projection based test-time Graph Neural Dataset Search method, named PGNDS, which maps the unseen test graph distribution back to the known training distribution through a generation process guided by the well-trained GNN. The proposed PGNDS framework consists of three key modules: (1) dual conditional diffusion for GNN-guided generative projection through a test-back-to-training distribution mapping; (2) dynamic search over the generative sampling space to select the most expressive test graphs; and (3) ensemble inference to aggregate information from the original and adapted test graphs. Extensive experiments on real-world graphs demonstrate the superior ability of our proposed PGNDS to improve test-time GNN inference.
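The three-module pipeline described above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: the toy "graph" representation (a list of feature values), the stubbed predictor, and all function names are assumptions, with the diffusion model replaced by random perturbation sampling for brevity.

```python
import random

def gnn_predict(graph):
    """Stand-in for a frozen, well-trained GNN: returns class probabilities."""
    s = sum(graph) / len(graph)
    return [s, 1.0 - s]

def generative_projection(test_graph, num_samples=4, seed=0):
    """Module 1 (dual conditional diffusion, stubbed here as perturbation
    sampling): generate candidate graphs that map the test graph back
    toward the training distribution."""
    rng = random.Random(seed)
    return [[min(max(x + rng.uniform(-0.1, 0.1), 0.0), 1.0) for x in test_graph]
            for _ in range(num_samples)]

def dynamic_search(candidates):
    """Module 2: select the candidate the frozen GNN is most confident on."""
    return max(candidates, key=lambda g: max(gnn_predict(g)))

def ensemble_inference(original, adapted):
    """Module 3: aggregate predictions from original and adapted graphs."""
    p_orig, p_adapt = gnn_predict(original), gnn_predict(adapted)
    return [(a + b) / 2 for a, b in zip(p_orig, p_adapt)]

test_graph = [0.2, 0.4, 0.9]  # hypothetical unseen test graph features
adapted = dynamic_search(generative_projection(test_graph))
probs = ensemble_inference(test_graph, adapted)
print(len(probs))  # prints 2 (a two-class prediction)
```

Note that the trained model's parameters are never updated in this sketch; only the test input is adapted, consistent with the data-centric setting the abstract describes.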
Lay Summary
Graph neural networks (GNNs) often struggle to perform well on new, unseen graphs because of distribution shifts in the data. Our paper introduces PGNDS, a new method that adapts to such graphs at test time without modifying the trained model. Rather than fine-tuning the GNN, PGNDS automatically transforms new graphs to resemble the training data, enabling better performance under shifted distributions. This improves prediction accuracy across tasks and provides a practical, data-centric solution for deploying GNNs in dynamic, real-world scenarios.