Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and often rely on adversarial training, which is computationally costly. We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification. Specifically, while existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness. We evaluate our method on a number of popular benchmarks, show that it can be applied to different architectures, and that it provides robustness to a variety of white-box and black-box attacks, while being simple and fast to train compared to existing alternatives.
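The abstract's key idea is injecting learned anisotropic (rather than isotropic) Gaussian noise into a hidden layer. A minimal NumPy sketch of the sampling step, assuming the common reparameterization of an anisotropic Gaussian via a lower-triangular factor `L` with covariance `L @ L.T` (here `L` is hand-picked for illustration; in the paper's method it would be learned to optimize the robustness bound):

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_anisotropic_noise(h, L, rng):
    """Add zero-mean Gaussian noise with covariance L @ L.T to activations h.

    h: (batch, d) hidden activations
    L: (d, d) lower-triangular factor (a learnable parameter in a real model)
    """
    eps = rng.standard_normal((h.shape[0], L.shape[0]))  # standard normal draws
    return h + eps @ L.T                                  # reparameterized sample

d = 4
h = np.zeros((2000, d))  # zero activations so the output isolates the noise
# Hypothetical anisotropic factor: a different noise scale per direction.
L = np.diag([0.1, 0.5, 1.0, 2.0])
noisy = inject_anisotropic_noise(h, L, rng)

# The empirical covariance of the noise should approximate L @ L.T.
cov = np.cov(noisy, rowvar=False)
```

Because `L` is not restricted to a multiple of the identity, the model can allocate more noise along directions where it helps robustness and less where it would hurt accuracy, which is what distinguishes this from hand-tuned isotropic noise.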
Author Information
Panagiotis Eustratiadis (University of Edinburgh)
Henry Gouk (University of Edinburgh)
Da Li (Samsung)
Timothy Hospedales (Samsung AI Centre / University of Edinburgh)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Weight-covariance alignment for adversarially robust neural networks »
  Wed. Jul 21st 01:20 -- 01:25 AM
More from the Same Authors
- 2022 : Attacking Adversarial Defences by Smoothing the Loss Landscape »
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2022 : HyperInvariances: Amortizing Invariance Learning »
  Ruchika Chavhan · Henry Gouk · Jan Stuehmer · Timothy Hospedales
- 2022 : Feed-Forward Source-Free Latent Domain Adaptation via Cross-Attention »
  Ondrej Bohdal · Da Li · Xu Hu · Timothy Hospedales
- 2023 : Impact of Noise on Calibration and Generalisation of Neural Networks »
  Martin Ferianc · Ondrej Bohdal · Timothy Hospedales · Miguel Rodrigues
- 2023 : Evaluating the Evaluators: Are Current Few-Shot Learning Benchmarks Fit for Purpose? »
  Luísa Shimabucoro · Timothy Hospedales · Henry Gouk
- 2023 : Why Do Self-Supervised Models Transfer? On Data Augmentation and Feature Properties »
  Linus Ericsson · Henry Gouk · Timothy Hospedales
- 2022 Poster: Loss Function Learning for Domain Generalization by Implicit Gradient »
  Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales
- 2022 Poster: Fisher SAM: Information Geometry and Sharpness Aware Minimisation »
  Minyoung Kim · Da Li · Xu Hu · Timothy Hospedales
- 2022 Spotlight: Fisher SAM: Information Geometry and Sharpness Aware Minimisation »
  Minyoung Kim · Da Li · Xu Hu · Timothy Hospedales
- 2022 Spotlight: Loss Function Learning for Domain Generalization by Implicit Gradient »
  Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales
- 2019 Poster: Analogies Explained: Towards Understanding Word Embeddings »
  Carl Allen · Timothy Hospedales
- 2019 Oral: Analogies Explained: Towards Understanding Word Embeddings »
  Carl Allen · Timothy Hospedales
- 2019 Poster: Feature-Critic Networks for Heterogeneous Domain Generalization »
  Yiying Li · Yongxin Yang · Wei Zhou · Timothy Hospedales
- 2019 Oral: Feature-Critic Networks for Heterogeneous Domain Generalization »
  Yiying Li · Yongxin Yang · Wei Zhou · Timothy Hospedales