CCLRec: Consensus-driven Contrastive Learning for LLM-enhanced Graph Recommendation
Abstract
Recommender systems aim to model user preferences accurately over large sets of candidate items. Graph neural networks (GNNs) have become a dominant approach in this domain owing to their ability to capture high-order user–item interactions. Recent efforts enhance GNN-based representation learning by incorporating the semantic reasoning capabilities of large language models (LLMs). However, existing methods typically process graph structural information and LLM-derived semantic knowledge separately, creating a supervisory gap between structural proximity and semantic relevance. To bridge this gap, we propose CCLRec, a consensus-driven contrastive learning framework for recommendation. CCLRec deeply integrates structural and semantic information by identifying signals on which both sources agree. Specifically, we first use an LLM to extract semantic representations of items and to sample candidate positive/negative sets in the semantic space. We then introduce a structural–semantic consensus mining strategy that computes the intersection between a node's structural neighbors in the graph and its semantically similar items, thereby identifying high-confidence positive pairs endorsed by both collaborative filtering patterns and LLM-based reasoning. By centering contrastive learning on these consensus pairs and applying a weight-aware reinforcement mechanism, CCLRec significantly amplifies the contribution of high-quality consensus features during training. Experiments on multiple public benchmarks show that CCLRec consistently outperforms state-of-the-art methods on key metrics, demonstrating the effectiveness of its consensus-aware design.
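To make the consensus mining step concrete, the following is a minimal sketch, not the paper's implementation: it assumes a structural neighbor set per item (e.g. co-interaction neighbors from the graph) and precomputed LLM semantic embeddings, and intersects each item's structural neighbors with its top-k semantic neighbors by cosine similarity. The function name `consensus_positives` and both inputs are hypothetical illustrations.

```python
import numpy as np

def consensus_positives(adj, sem_emb, k=5):
    """Intersect structural neighbors with top-k semantic neighbors.

    adj: dict mapping each item to its set of structural neighbors.
    sem_emb: dict mapping each item to its LLM-derived embedding vector.
    Returns a dict mapping each item to its consensus positive set.
    """
    items = list(sem_emb)
    vecs = np.stack([sem_emb[i] for i in items])
    # Normalize rows so the dot product equals cosine similarity.
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    consensus = {}
    for idx, item in enumerate(items):
        order = np.argsort(-sims[idx])          # most similar first
        topk = {items[j] for j in order[1:k + 1]}  # skip the item itself
        # High-confidence positives: endorsed by both structure and semantics.
        consensus[item] = adj.get(item, set()) & topk
    return consensus
```

Items whose structural neighbors and semantic neighbors never overlap simply receive an empty consensus set, so only pairs supported by both views feed the contrastive objective.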