

Poster

Adversarial Attack on Graph Structured Data

Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song

Hall B #53

Abstract:

Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the extensive research on adversarial attack and defense for images and text. In this paper, we focus on adversarial attacks that fool deep learning models by modifying the combinatorial structure of the data. We first propose a reinforcement learning based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent for the scenario where prediction confidence or gradients are additionally available. Using both synthetic and real-world data, we show that a family of Graph Neural Network models is vulnerable to these attacks in both graph-level and node-level classification tasks. We also show that such attacks can be used to diagnose the learned classifiers.
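To make the idea of attacking the combinatorial structure concrete, the sketch below shows a minimal gradient-guided edge-flip attack on a toy one-layer linear graph propagation model. All names and the surrogate model here are illustrative assumptions, not the paper's method or models: the paper's main contribution is a reinforcement learning attack (needing only prediction labels), whereas this sketch corresponds to the simpler white-box, gradient-based setting the abstract also mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n nodes, d features, c classes, one-layer linear
# propagation logits = (A + I) @ X @ W. This surrogate is only for
# illustration; it is not one of the GNNs attacked in the paper.
n, d, c = 12, 8, 3
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, c))            # stand-in for trained weights
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T         # symmetric adjacency, no self-loops

def node_logits(A, X, W):
    return (A + np.eye(len(A))) @ X @ W

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attack(A, target, label, budget=3):
    """Greedily flip edges around `target` to raise its classification
    loss (hypothetical white-box baseline, not the paper's RL attack)."""
    A = A.copy()
    H = X @ W                                   # per-node class scores
    for _ in range(budget):
        p = softmax(node_logits(A, X, W)[target])
        g_logit = p - np.eye(c)[label]          # d(loss)/d(logits[target])
        grad_row = H @ g_logit                  # d(loss)/dA[target, j]
        # Flipping edge (target, j): delta = +1 if absent, -1 if present.
        delta = 1.0 - 2.0 * A[target]
        score = grad_row * delta                # first-order loss increase
        score[target] = -np.inf                 # disallow self-loop edits
        j = int(np.argmax(score))
        A[target, j] = A[j, target] = 1.0 - A[target, j]
    return A

target, label = 0, 1
before = softmax(node_logits(A, X, W)[target])[label]
after = softmax(node_logits(attack(A, target, label), X, W)[target])[label]
print(f"target-class confidence: {before:.3f} -> {after:.3f}")
```

The key point the sketch shares with the abstract is the search space: the attacker perturbs discrete structure (adding or removing a few edges under a small budget) rather than continuous pixel or embedding values, which is why the paper turns to reinforcement learning, genetic algorithms, or gradient heuristics to navigate it.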
