Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input perturbation in the opposite direction of the adversarial one and feeds the classifier a perturbed version of the input. Our approach is training-free and theoretically supported. We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models, and conduct large-scale experiments, ranging from black-box to adaptive attacks, on CIFAR10, CIFAR100, and ImageNet. Our anti-adversary layer significantly enhances model robustness while incurring no cost in clean accuracy.
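A minimal sketch of the idea described in the abstract, assuming a differentiable PyTorch classifier `model`: the input is nudged toward higher confidence for the model's own prediction (the opposite direction of an adversarial attack) before classification. The function name, the number of steps `k`, the step size `alpha`, and the sign-based update are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def anti_adversary_forward(model, x, k=2, alpha=0.15):
    """Classify x after adding a counter-perturbation that raises the model's
    confidence in its own prediction (opposite of an adversarial attack)."""
    model.eval()
    with torch.no_grad():
        pseudo_label = model(x).argmax(dim=1)  # class the model currently predicts

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(k):
        loss = F.cross_entropy(model(x + delta), pseudo_label)
        grad, = torch.autograd.grad(loss, delta)
        # Step DOWN the loss, i.e. toward higher confidence: the anti-adversarial direction.
        delta = (delta - alpha * grad.sign()).detach().requires_grad_(True)

    with torch.no_grad():
        return model(x + delta)  # prediction on the counter-perturbed input
```

Because the perturbation is computed at inference time from the model's own prediction, a sketch like this adds no training cost; only the extra forward/backward passes per input.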
Author Information
Motasem Alfarra (King Abdullah University of Science and Technology (KAUST))
Juan C Perez (Universidad de los Andes)
Ali Thabet (KAUST)
Adel Bibi (KAUST)
Phil Torr (Oxford)
Bernard Ghanem (KAUST)
More from the Same Authors
- 2021 : Detecting and Quantifying Malicious Activity with Simulation-based Inference »
  Andrew Gambardella · Naeemullah Khan · Phil Torr · Atilim Gunes Baydin
- 2022 : Make Some Noise: Reliable and Efficient Single-Step Adversarial Training »
  Pau de Jorge Aranda · Adel Bibi · Riccardo Volpi · Amartya Sanyal · Phil Torr · Gregory Rogez · Puneet Dokania
- 2022 : Catastrophic overfitting is a bug but also a feature »
  Guillermo Ortiz Jimenez · Pau de Jorge Aranda · Amartya Sanyal · Adel Bibi · Puneet Dokania · Pascal Frossard · Gregory Rogez · Phil Torr
- 2022 : Illusionary Attacks on Sequential Decision Makers and Countermeasures »
  Tim Franzmeyer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder
- 2022 : How robust are pre-trained models to distribution shift? »
  Yuge Shi · Imant Daunhawer · Julia Vogt · Phil Torr · Amartya Sanyal
- 2022 : Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS »
  Christian Schroeder · Yongchao Huang · Phil Torr · Martin Strohmeier
- 2022 Poster: Adversarial Masking for Self-Supervised Learning »
  Yuge Shi · Siddharth N · Phil Torr · Adam Kosiorek
- 2022 Spotlight: Adversarial Masking for Self-Supervised Learning »
  Yuge Shi · Siddharth N · Phil Torr · Adam Kosiorek
- 2022 Poster: Communicating via Markov Decision Processes »
  Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster
- 2022 Spotlight: Communicating via Markov Decision Processes »
  Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster
- 2021 Poster: Training Graph Neural Networks with 1000 Layers »
  Guohao Li · Matthias Müller · Bernard Ghanem · Vladlen Koltun
- 2021 Spotlight: Training Graph Neural Networks with 1000 Layers »
  Guohao Li · Matthias Müller · Bernard Ghanem · Vladlen Koltun
- 2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson