DiP-G: Discrete Prompting for Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) are increasingly adopting the "pre-training, adaptation" paradigm, which first pre-trains GNNs on large-scale unlabeled graph data and then adapts them to specific downstream tasks. As a common pattern, graph prompting adapts a frozen encoder by modifying the input graph structure rather than fine-tuning the model parameters. However, most existing graph prompting approaches optimize a continuous, weighted adjacency structure during adaptation, yet require a hard discretization at inference time. This discrepancy causes a train-test mismatch that is particularly harmful in few-shot regimes. To address this issue, we propose Discrete Prompting for Graphs (DiP-G), a framework that directly learns task-specific topology prompts in the combinatorial space. DiP-G operates on multi-hop local candidate subgraphs to ensure scalability, generates hard k-sparse prompts through a perturbed Top-k solver, and optimizes the discrete structures with an I-MLE gradient estimator. To improve the efficiency of the backward pass, we further introduce an adaptive active-set screening rule that accelerates the target solve while provably preserving solution accuracy. Extensive experiments on multiple benchmark datasets validate the effectiveness of the proposed method. Our main code is available in the supplementary materials.
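To illustrate the core mechanism the abstract describes, the sketch below shows a generic perturbed Top-k solver with an I-MLE-style gradient estimate over candidate edge scores. This is a minimal, hypothetical illustration in NumPy, not the paper's implementation; the function names, the Gumbel noise choice, and the target-distribution step size `lam` are assumptions for the sketch.

```python
import numpy as np

def topk_onehot(scores, k):
    """Hard k-sparse indicator: 1 for the k largest scores, 0 elsewhere."""
    z = np.zeros_like(scores)
    z[np.argsort(scores)[-k:]] = 1.0
    return z

def imle_topk_gradient(theta, downstream_grad, k, lam=1.0, noise_scale=1.0, seed=0):
    """I-MLE-style gradient estimate for a perturbed Top-k solver (illustrative sketch).

    theta:           candidate edge scores (logits) over the local candidate subgraph
    downstream_grad: dL/dz from the task loss at the sampled discrete prompt z
    lam:             target-distribution step size (I-MLE hyperparameter, assumed)
    """
    rng = np.random.default_rng(seed)
    eps = noise_scale * rng.gumbel(size=theta.shape)   # perturb-and-MAP noise
    z = topk_onehot(theta + eps, k)                    # forward: hard k-sparse prompt
    # Target solve: re-run Top-k on scores nudged against the downstream gradient
    z_target = topk_onehot(theta + eps - lam * downstream_grad, k)
    # I-MLE gradient: difference of the two discrete solutions
    grad_theta = (z - z_target) / lam
    return z, grad_theta

# Example: 4 candidate edges, select k=2; a zero downstream gradient yields a zero estimate
z, g = imle_topk_gradient(np.array([3.0, 1.0, 2.0, 0.0]), np.zeros(4), k=2)
```

The forward pass always emits an exactly k-sparse structure, so there is no train-test discretization gap; the gradient for `theta` comes from comparing the sampled solution with a loss-informed "target" solve of the same Top-k problem.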