Dual-channel Dynamic Graph Neural Networks with Adaptive Adjacency Learning and Multi-scale Representation Fusion
Abstract
Graph neural networks (GNNs) have proven to be powerful tools for analyzing graph-structured data. However, most existing methods rely on fixed adjacency structures for information propagation and thus adapt poorly to latent semantic relationships that exist in the graph but are not explicitly connected, especially from complementary high-pass and low-pass filtering views. To this end, this paper proposes a novel Dual-channel Dynamic Graph Neural Network (DCD-GNN), consisting of two parallel representation-learning channels: a static structure-preserving channel and a dynamic adjacency-enhancing channel. The dynamic channel captures both low-pass structural smoothing and high-pass personalized detail via self-attention adjacency learning and integrates them for comprehensive semantic modeling, while the static channel maintains structural stability. Both channels employ a multi-scale representation fusion mechanism and are finally fused into a unified, discriminative node embedding. Extensive experiments on various graph benchmark datasets verify the superiority of DCD-GNN in discriminative graph representation learning.
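To make the dual-channel design concrete, the following is a minimal NumPy sketch of one layer, under stated assumptions: the function name `dcd_gnn_layer`, the symmetric GCN-style normalization in the static channel, the scaled dot-product attention adjacency, and the mixing weights `alpha` and `beta` are all illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dcd_gnn_layer(X, A, Wq, Wk, alpha=0.5, beta=0.7):
    """Hypothetical single DCD-GNN layer.

    X:  (n, d) node features; A: (n, n) binary adjacency.
    Wq, Wk: (d, d) projection matrices for self-attention adjacency learning.
    alpha: static/dynamic channel fusion weight (assumed scalar gate).
    beta:  low-pass vs. high-pass mixing weight inside the dynamic channel.
    """
    n = A.shape[0]

    # Static structure-preserving channel: propagate over the fixed graph
    # with self-loops and symmetric normalization (low-pass on the given structure).
    A_hat = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H_static = A_norm @ X

    # Dynamic adjacency-enhancing channel: learn a dense adjacency S by
    # scaled dot-product self-attention over the node features.
    S = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(Wq.shape[1]))
    low = S @ X          # low-pass view: smoothing over learned neighbors
    high = X - S @ X     # high-pass view: personalized detail (residual)
    H_dynamic = beta * low + (1.0 - beta) * high

    # Fuse the two channels into one unified node embedding.
    return alpha * H_static + (1.0 - alpha) * H_dynamic
```

A full model would stack such layers with learnable weights and apply the multi-scale representation fusion across layer outputs; the sketch only shows the per-layer dual-channel filtering and fusion.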