

Poster

Graph Adversarial Diffusion Convolution via Laplacian Distance

Songtao Liu · Jinghui Chen · Tianfan Fu · Lu Lin · Marinka Zitnik · Dinghao Wu


Abstract:

Graph diffusion convolution (GDC) leverages the power of generalized graph diffusion, which has demonstrated impressive performance across many tasks. However, GDC can encounter issues when graph adversarial attacks compromise the graph structure or when graph nodes have few neighbors. To address these challenges, we draw inspiration from adversarial training, which can improve the robustness of neural networks, and propose a min-max optimization formulation of the Graph Signal Denoising (GSD) problem. In this formulation, the inner maximization problem seeks a perturbation of the graph, measured by spectral distance. This introduces an additional perturbation term into the outer optimization problem, thereby increasing its loss. By solving the outer optimization problem, we derive a new GNN architecture, Graph Adversarial Diffusion Convolution (GADC), which differs from GDC by integrating an additional term. Compared to GDC, this additional term improves robustness against graph adversarial attacks and large feature-noise perturbations. Furthermore, it can enhance the performance of GDC on heterophilic graphs. Extensive experiments demonstrate the effectiveness of GADC across diverse benchmarks, including node and link prediction tasks.
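The min-max formulation described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact objective: the standard GSD problem minimizes a fidelity term plus a Laplacian smoothness term, and the adversarial variant adds an inner maximization over a bounded Laplacian perturbation. All symbols here ($F$ for the denoised signal, $X$ for node features, $L$ for the graph Laplacian, $\Delta$ for the perturbation, $\epsilon$ for its budget, $\lambda$ for the smoothness weight) are assumed notation, not taken from the abstract.

```latex
\min_{F} \; \max_{\|\Delta\| \le \epsilon} \;
\underbrace{\|F - X\|_F^2}_{\text{fidelity}}
\; + \; \lambda \,
\underbrace{\operatorname{tr}\!\left( F^\top (L + \Delta)\, F \right)}_{\text{smoothness on perturbed graph}}
```

Intuitively, the inner maximization picks the worst-case structural perturbation $\Delta$ (bounded in spectral distance), and solving the resulting outer minimization yields a diffusion operator with an extra corrective term relative to standard GDC.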
