CL-GCL: Comprehensive and Lightweight Graph Contrastive Learning
Abstract
Graph Contrastive Learning (GCL) has significantly advanced self-supervised representation learning on graphs, yet its practical efficacy remains hindered by random augmentations that induce semantic distortion and by a rigid one-to-one sampling strategy that amplifies inter-class entanglement and intra-class dispersion. To address these limitations, we develop CL-GCL, a Comprehensive and Lightweight Graph Contrastive Learning framework. Specifically, we exploit graph coarsening to preserve structural semantics through community-level representations, and manifold learning to capture local geometric relations without costly pairwise distance computations. This design naturally aligns with the neighborhood aggregation principle of Graph Convolutional Networks, enhancing structural consistency while eliminating negative sampling bias. We theoretically prove that CL-GCL approximates the node-level contrastive loss under mild conditions. Extensive experiments demonstrate consistent superiority in both accuracy and efficiency over state-of-the-art GCL methods.