Revisiting Pre-Propagation GNNs: Robust Diffusion Operators and Hidden-State Re-Propagation
Abstract
Pre-propagation graph neural networks (PP-GNNs) decouple node feature propagation from transformation: graph diffusion is performed once as preprocessing, and training reduces to dense per-node transformations. This design enables mini-batch training without inter-node dependencies, avoids repeated sparse matrix--matrix multiplications, and better matches modern accelerators optimized for dense compute. However, their expressivity remains unclear, and empirical results show a gap between PP-GNNs and their message-passing counterparts on commonly used graph benchmarks, especially heterophilic ones. In this paper, we propose a suite of robust graph diffusion operators for preprocessing and a few-shot hidden-state re-propagation scheme during training. Our methods improve the validation and test accuracy of PP-GNNs, enabling them to match message-passing GNNs in accuracy while maintaining training efficiency.
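For concreteness, the following is a minimal sketch of the generic PP-GNN recipe described above, not the paper's implementation or its proposed operators: graph diffusion runs once as a preprocessing step, after which each node is an independent example and training is an ordinary mini-batch MLP. The choice of operator (symmetric normalization with a fixed number of hops) and all names, hyperparameters, and function signatures here are illustrative assumptions.

```python
# Hypothetical sketch of a pre-propagation GNN pipeline (illustrative only).
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn


def precompute_diffusion(adj: sp.csr_matrix, X: np.ndarray, hops: int = 2) -> np.ndarray:
    """One-time preprocessing: symmetrically normalize the adjacency and
    propagate node features `hops` times (a simple diffusion operator)."""
    deg = np.asarray(adj.sum(axis=1)).ravel().astype(np.float64)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    D = sp.diags(d_inv_sqrt)
    A_norm = D @ adj @ D                  # D^{-1/2} A D^{-1/2}
    H = X.astype(np.float64)
    for _ in range(hops):
        H = A_norm @ H                    # sparse-dense products, done once offline
    return H


def train_mlp(H: np.ndarray, y: np.ndarray, num_classes: int, epochs: int = 50):
    """After preprocessing there are no inter-node dependencies, so standard
    mini-batching applies with no neighborhood sampling."""
    X = torch.tensor(H, dtype=torch.float32)
    labels = torch.tensor(y, dtype=torch.long)
    model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(),
                          nn.Linear(64, num_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(X, labels), batch_size=512, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(xb), yb)
            loss.backward()
            opt.step()
    return model
```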