

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

A Simple and Yet Fairly Effective Defense for Graph Neural Networks

Sofiane Ennadir · Yassine Abbahaddou · Michalis Vazirgiannis · Henrik Boström

Keywords: [ Graph Neural Networks ] [ Graph Neural Network Robustness ] [ Node classification ]


Abstract:

Graph neural networks (GNNs) have become the standard approach for machine learning on graphs. However, concerns have been raised regarding their vulnerability to small adversarial perturbations. Existing defense methods suffer from high time complexity and can degrade the model's performance on clean graphs. In this paper, we propose NoisyGCN, a defense method that injects noise into the GCN architecture. We derive a mathematical upper bound linking the GCN's robustness to the injected noise, establishing our method's effectiveness. Through empirical evaluations on the node classification task, we demonstrate performance superior or comparable to existing defenses while minimizing the added time complexity.
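Since the abstract only sketches the mechanism, the snippet below is a minimal, illustrative sketch of how noise injection into a GCN layer could look, written in plain PyTorch with a dense normalized adjacency matrix. The Gaussian noise form, the scale hyperparameter `sigma`, and the injection point (the hidden representation during training) are assumptions made for illustration, not the paper's exact NoisyGCN formulation.

```python
# Illustrative sketch of a noise-injected GCN layer (NOT the paper's exact
# NoisyGCN): Gaussian noise, its scale `sigma`, and the injection point are
# assumptions for demonstration purposes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, sigma=0.1):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.sigma = sigma  # noise scale (hypothetical hyperparameter)

    def forward(self, x, adj_norm):
        # Standard GCN propagation: A_hat @ X @ W
        h = adj_norm @ self.linear(x)
        if self.training:
            # Inject zero-mean Gaussian noise into the hidden representation.
            h = h + self.sigma * torch.randn_like(h)
        return h


def normalize_adjacency(adj):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


if __name__ == "__main__":
    # Toy graph: 4 nodes, 3 input features, 2 classes.
    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    x = torch.randn(4, 3)
    layer = NoisyGCNLayer(3, 2, sigma=0.1)
    layer.train()
    logits = layer(x, normalize_adjacency(adj))
    print(F.log_softmax(logits, dim=1))
```

In this sketch the noise is applied only during training (guarded by `self.training`), mirroring dropout-style regularization; a smoothing-style defense would instead keep the noise active at inference time. Which regime NoisyGCN uses is not stated in the abstract.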
