Oral
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann · Sebastian Lunz · Peter Maass · Carola-Bibiane Schönlieb
Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behaviour by considering the alignment between the input image and its saliency map. We hypothesize that as the distance to the decision boundary grows, so does this alignment; for linear models, the connection holds exactly. We confirm these theoretical findings with experiments on models trained with a local Lipschitz regularization and identify where the nonlinear nature of neural networks weakens the relation.
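The alignment measure referred to in the abstract can be made concrete. Below is a minimal sketch (not the authors' code), assuming the saliency map is the gradient of the predicted-class score with respect to the input, and alignment is the absolute inner product of input and saliency map normalized by the saliency map's norm; the toy model and input are illustrative stand-ins.

    # Minimal sketch of input/saliency-map alignment (assumed definition:
    # |<x, g>| / ||g||, with g the gradient of the top logit w.r.t. the input).
    import torch
    import torch.nn as nn

    def alignment(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
        x = x.clone().requires_grad_(True)
        logits = model(x.unsqueeze(0)).squeeze(0)  # single example
        top_logit = logits[logits.argmax()]
        (g,) = torch.autograd.grad(top_logit, x)   # saliency map
        return (x.detach() * g).sum().abs() / g.norm()

    # Toy usage; model and input are stand-ins. For a bias-free linear binary
    # classifier f(x) = w.x, this quantity equals |w.x| / ||w||, the distance
    # to the decision boundary, which is the sense in which the connection
    # holds exactly for linear models.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
    x = torch.randn(16)
    print(float(alignment(model, x)))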
Author Information
Christian Etmann (University of Bremen)
Sebastian Lunz (University of Cambridge)
Peter Maass (University of Bremen)
Carola-Bibiane Schönlieb (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: On the Connection Between Adversarial Robustness and Saliency Map Interpretability
  Fri. Jun 14th, 01:30 -- 04:00 AM, Room: Pacific Ballroom
More from the Same Authors
- 2021: Invited talk 1 - Lessons from the Pandemic for Machine Learning and Medical Imaging (Workshop CompBio)
  Carola-Bibiane Schönlieb · Michael Roberts
- 2020 Poster: Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems
  Kaixuan Wei · Angelica I Aviles-Rivero · Jingwei Liang · Ying Fu · Carola-Bibiane Schönlieb · Hua Huang
- 2018 Oral: Local Convergence Properties of SAGA/Prox-SVRG and Acceleration
  Clarice Poon · Jingwei Liang · Carola-Bibiane Schönlieb
- 2018 Poster: Local Convergence Properties of SAGA/Prox-SVRG and Acceleration
  Clarice Poon · Jingwei Liang · Carola-Bibiane Schönlieb