
Poster

Efficient Contrastive Learning for Fast and Accurate Inference on Graphs

Teng Xiao · Huaisheng Zhu · Zhiwei Zhang · Zhimeng Guo · Charu Aggarwal · Suhang Wang · Vasant Honavar


Abstract:

Graph contrastive learning has made remarkable advances in settings where task-specific labels are scarce. Despite these advances, the significant computational overhead that existing methods incur during representation inference, owing to their reliance on intensive message passing, makes them unsuitable for latency-constrained applications. To address this problem, we present GraphECL, a simple and efficient contrastive learning method for fast inference on graphs. GraphECL does away with the need for expensive message passing during inference. Specifically, it introduces a novel coupling of MLP and GNN models, where the former learns to efficiently mimic the computations performed by the latter. We provide a theoretical analysis showing why the MLP can capture essential structural information from neighbors well enough to match the performance of the GNN on downstream tasks. We present results of extensive experiments on widely used real-world benchmarks showing that GraphECL achieves superior performance and inference efficiency compared to state-of-the-art graph contrastive learning (GCL) methods on both homophilous and heterophilous graphs. On large-scale graphs such as Snap-patents and Ogbn-papers100M, GraphECL is 200x faster than existing methods.
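To make the coupling described above concrete, here is a minimal PyTorch sketch of the general idea, not the authors' GraphECL implementation: an MLP branch that sees only raw node features is trained with a cross-view contrastive (InfoNCE-style) objective against a one-hop message-passing branch, so that inference needs only the MLP. The toy graph, dimensions, loss form, and all names (gnn, mlp, info_nce) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy graph: N nodes with random features and a sparse symmetric adjacency.
N, d_in, d_hid = 100, 32, 64
X = torch.randn(N, d_in)
A = (torch.rand(N, N) < 0.05).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(1.0)                      # add self-loops
A_hat = A / A.sum(1, keepdim=True)         # row-normalized adjacency

# GNN branch: one round of mean aggregation followed by a linear map.
gnn = torch.nn.Linear(d_in, d_hid)
# MLP branch: operates on raw features only, so inference needs no graph.
mlp = torch.nn.Sequential(
    torch.nn.Linear(d_in, d_hid), torch.nn.ReLU(),
    torch.nn.Linear(d_hid, d_hid),
)
opt = torch.optim.Adam(list(gnn.parameters()) + list(mlp.parameters()), lr=1e-3)

def info_nce(z1, z2, tau=0.5):
    # Cross-view InfoNCE: node i in view 1 should match node i in view 2,
    # with all other nodes serving as negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

for step in range(200):
    h_gnn = gnn(A_hat @ X)                 # message-passing view
    h_mlp = mlp(X)                         # graph-free view
    loss = info_nce(h_mlp, h_gnn)          # couple the two models
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: only the MLP is used, with no neighbor fetching or aggregation,
# which is where the latency savings the abstract reports would come from.
with torch.no_grad():
    embeddings = mlp(X)
```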
