

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Rethinking Label Poisoning for GNNs: Pitfalls and Attacks

Vijay Lingam · Mohammad Sadegh Akhondzadeh · Aleksandar Bojchevski

Keywords: [ Label Poisoning Attacks ] [ Graph Neural Networks ] [ Robustness ]


Abstract: Node labels for graphs are usually generated by an automated process or crowd-sourced from human users. This opens up avenues for malicious actors to compromise the training labels, making it unwise to rely on them blindly. While robustness to noisy labels is an active area of research, only a handful of papers address it for graph-based data, and the effects of adversarial label perturbations are studied even more sparsely. A recent work revealed that the entire literature on label poisoning for GNNs is plagued by serious evaluation pitfalls, and showed that existing attacks become ineffective once these shortcomings are fixed. In this work, we introduce two new simple yet effective attacks that are significantly stronger (by up to $\sim8\%$) than the previous strongest attack. Our work demonstrates the need for more robust defense mechanisms, especially given the \emph{transferability} of our attacks, where a strategy devised for one model can effectively contaminate numerous other models.
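The abstract does not detail the attacks themselves, so for orientation, here is a minimal sketch of the label-poisoning threat model it studies: an adversary controls a small budget of training labels and flips them before the GNN is trained. The random-flip strategy and the `poison_labels` helper below are illustrative assumptions, not the paper's attacks (which are substantially stronger than this naive baseline).

```python
import numpy as np

def poison_labels(y_train, budget, num_classes, seed=None):
    """Flip a `budget` fraction of training labels to random other classes.

    Naive random-flip baseline for illustration only; the paper's attacks
    choose which labels to flip far more strategically.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    n_flips = int(budget * len(y_train))
    idx = rng.choice(len(y_train), size=n_flips, replace=False)
    for i in idx:
        # Draw a new label uniformly from the classes other than the true one.
        choices = [c for c in range(num_classes) if c != y_train[i]]
        y_poisoned[i] = rng.choice(choices)
    return y_poisoned

# Example: poison 10% of 100 training labels over 7 classes (Cora-sized).
y = np.random.default_rng(0).integers(0, 7, size=100)
y_hat = poison_labels(y, budget=0.10, num_classes=7, seed=0)
print((y != y_hat).sum(), "labels flipped")  # -> 10 labels flipped
```

A GNN trained on `y_hat` in place of `y` then serves as the poisoned victim model; the paper's transferability claim is that a flip set crafted against one architecture also degrades others.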
