Many recent studies have highlighted the susceptibility of virtually all machine-learning models to adversarial attacks. Adversarial attacks are imperceptible perturbations of an input example, carefully designed to alter the otherwise correct prediction of a given model. In this paper, we study the generalization properties of adversarial learning. In particular, we derive high-probability generalization bounds on the adversarial risk in terms of the empirical adversarial risk, the complexity of the function class, and the adversarial noise set. Our bounds apply broadly to many models, losses, and adversaries. We showcase their applicability by deriving adversarial generalization bounds for the multi-class classification setting and various prediction models (including linear models and Deep Neural Networks). We also derive optimistic adversarial generalization bounds for the case of smooth losses. To the best of our knowledge, these are the first fast-rate bounds valid for adversarial deep learning.
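As a rough illustration of the quantities mentioned in the abstract (the notation below is a generic sketch; the paper's exact adversary model and symbols may differ), the adversarial risk of a predictor f under a loss \ell and a perturbation set B(x), together with its empirical counterpart over a sample of size n, are commonly written as:

% Adversarial (population) risk and empirical adversarial risk of a predictor f,
% where B(x) is the set of allowed perturbations of the input x (e.g. an l_p ball).
\[
  R_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
    \Big[\, \sup_{x' \in B(x)} \ell\big(f(x'), y\big) \Big],
  \qquad
  \widehat{R}_{\mathrm{adv}}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}
    \sup_{x' \in B(x_i)} \ell\big(f(x'), y_i\big).
\]

A high-probability generalization bound of the kind described above then controls the gap R_adv(f) - \widehat{R}_adv(f) uniformly over the function class, with the complexity of the class and the size of the perturbation set B entering the rate.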
Author Information
Waleed Mustafa (TU Kaiserslautern)
Yunwen Lei (University of Birmingham)
Marius Kloft (TU Kaiserslautern)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: On the Generalization Analysis of Adversarial Learning
  Tue. Jul 19th, 03:35 -- 03:40 PM, Room Hall G
More from the Same Authors
- 2023: Computing non-vacuous PAC-Bayes generalization bounds for Models under Adversarial Corruptions
  Waleed Mustafa · Philipp Liznerski · Dennis Wagner · Puyu Wang · Marius Kloft
- 2023 Poster: Deep Anomaly Detection under Labeling Budget Constraints
  Aodong Li · Chen Qiu · Marius Kloft · Padhraic Smyth · Stephan Mandt · Maja Rudolph
- 2023 Poster: Generalization Analysis for Contrastive Representation Learning
  Yunwen Lei · Tianbao Yang · Yiming Ying · Ding-Xuan Zhou
- 2023 Poster: Training Normalizing Flows from Dependent Data
  Matthias Kirchler · Christoph Lippert · Marius Kloft
- 2022 Poster: Latent Outlier Exposure for Anomaly Detection with Contaminated Data
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2022 Spotlight: Latent Outlier Exposure for Anomaly Detection with Contaminated Data
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2021 Poster: Neural Transformation Learning for Deep Anomaly Detection Beyond Images
  Chen Qiu · Timo Pfrommer · Marius Kloft · Stephan Mandt · Maja Rudolph
- 2021 Spotlight: Neural Transformation Learning for Deep Anomaly Detection Beyond Images
  Chen Qiu · Timo Pfrommer · Marius Kloft · Stephan Mandt · Maja Rudolph
- 2021 Poster: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Oral: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2020 Poster: Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
  Yunwen Lei · Yiming Ying
- 2018 Poster: Deep One-Class Classification
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft
- 2018 Oral: Deep One-Class Classification
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft