The last few years have seen a staggering number of empirical studies of the robustness of neural networks in a model of adversarial perturbations of their inputs. Most rely on an adversary which carries out local modifications within prescribed balls. None, however, has so far questioned the broader picture: how to frame a \textit{resource-bounded} adversary so that it can be \textit{severely detrimental} to learning, a non-trivial problem which entails, at a minimum, the choice of loss and classifiers.
We suggest a formal answer for losses that satisfy the minimal statistical requirement of being \textit{proper}. We pin down a simple sufficient property for any given class of adversaries to be detrimental to learning, involving a central measure of ``harmfulness'' which generalizes the well-known class of integral probability metrics. A key feature of our result is that it holds for \textit{all} proper losses, and for a popular subset of these, the optimisation of this central measure appears to be \textit{independent of the loss}. When classifiers are Lipschitz (a now popular constraint in adversarial training), this optimisation reduces to \textit{optimal transport}, performing a low-budget compression of the class marginals. Toy experiments corroborate a finding recently observed independently: training against a sufficiently budgeted adversary of this kind \textit{improves} generalization.
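For readers outside the area, two standard definitions the abstract leans on may help; the recap below is textbook material, not taken from the paper. A loss $\ell$ is \textit{proper} if reporting the true class probability minimizes expected loss: for every $\eta, v \in [0,1]$,
$$\mathbb{E}_{Y \sim \eta}\left[\ell(Y, \eta)\right] \;\le\; \mathbb{E}_{Y \sim \eta}\left[\ell(Y, v)\right].$$
An integral probability metric (IPM) over a function class $\mathcal{F}$ compares distributions $P, Q$ via
$$\gamma_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{P}[f] - \mathbb{E}_{Q}[f] \right|,$$
and when $\mathcal{F}$ is the set of $1$-Lipschitz functions, Kantorovich--Rubinstein duality gives $\gamma_{\mathcal{F}} = W_1$, the Wasserstein-1 distance; this duality is the bridge between Lipschitz classifiers and optimal transport mentioned above.

To make the optimal-transport reading concrete, below is a minimal one-dimensional toy in Python; it is illustrative only, not the paper's algorithm, and the uniform-shift "attack" and all names in it are hypothetical. It computes $W_1$ between two empirical class marginals before and after a budget-$\epsilon$ perturbation: by the duality above, a smaller $W_1$ means the marginals are harder for any $1$-Lipschitz classifier to tell apart.

import numpy as np
from scipy.stats import wasserstein_distance  # exact Wasserstein-1 in 1-D

rng = np.random.default_rng(0)
pos = rng.normal(loc=+1.0, scale=1.0, size=500)  # positive-class marginal (toy)
neg = rng.normal(loc=-1.0, scale=1.0, size=500)  # negative-class marginal (toy)

# Separation available to 1-Lipschitz classifiers before any attack.
print("W1 before attack:", wasserstein_distance(pos, neg))

# Hypothetical adversary with per-class transport budget eps: shift each
# marginal toward the other by eps (every point moves eps, so the transport
# cost paid per class is exactly eps).
eps = 0.5
print("W1 after attack: ", wasserstein_distance(pos - eps, neg + eps))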
Author Information
Zac Cranko (ANU)
Aditya Menon (Google Research)
Richard Nock (Data61, The Australian National University and the University of Sydney)
Cheng Soon Ong (Data61 and ANU)

I am a senior principal research scientist at the Machine Learning Research Group, Data61, CSIRO, and the director of the Machine Learning and Artificial Intelligence Future Science Platform at CSIRO. I am also an adjunct associate professor at the Australian National University. I am interested in enabling scientific discovery by extending statistical machine learning methods. In recent years, we have developed new optimisation methods for problems such as ranking, feature selection and experimental design, with the aim of answering scientific questions in collaboration with experts in other fields. This has included diverse problems in genomics, systems biology, and astronomy. I advocate strongly for open science, as well as diversity and inclusion.
Zhan Shi (University of Illinois at Chicago)
Christian Walder (Data61, the Australian National University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Monge blunts Bayes: Hardness Results for Adversarial Training »
  Wed. Jun 12th 09:00 -- 09:20 PM, Room 103
More from the Same Authors
- 2022 Poster: In defense of dual-encoders for neural ranking »
  Aditya Menon · Sadeep Jayasumana · Ankit Singh Rawat · Seungyeon Kim · Sashank Jakkam Reddi · Sanjiv Kumar
- 2022 Spotlight: In defense of dual-encoders for neural ranking »
  Aditya Menon · Sadeep Jayasumana · Ankit Singh Rawat · Seungyeon Kim · Sashank Jakkam Reddi · Sanjiv Kumar
- 2021 Poster: Quantile Bandits for Best Arms Identification »
  Mengyan Zhang · Cheng Soon Ong
- 2021 Spotlight: Quantile Bandits for Best Arms Identification »
  Mengyan Zhang · Cheng Soon Ong
- 2021 Poster: A statistical perspective on distillation »
  Aditya Menon · Ankit Singh Rawat · Sashank Jakkam Reddi · Seungyeon Kim · Sanjiv Kumar
- 2021 Poster: Disentangling Sampling and Labeling Bias for Learning in Large-output Spaces »
  Ankit Singh Rawat · Aditya Menon · Wittawat Jitkrittum · Sadeep Jayasumana · Felix Xinnan Yu · Sashank Jakkam Reddi · Sanjiv Kumar
- 2021 Poster: Generalised Lipschitz Regularisation Equals Distributional Robustness »
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2021 Spotlight: Generalised Lipschitz Regularisation Equals Distributional Robustness »
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2021 Spotlight: A statistical perspective on distillation »
  Aditya Menon · Ankit Singh Rawat · Sashank Jakkam Reddi · Seungyeon Kim · Sanjiv Kumar
- 2021 Spotlight: Disentangling Sampling and Labeling Bias for Learning in Large-output Spaces »
  Ankit Singh Rawat · Aditya Menon · Wittawat Jitkrittum · Sadeep Jayasumana · Felix Xinnan Yu · Sashank Jakkam Reddi · Sanjiv Kumar
- 2020 Poster: Does label smoothing mitigate label noise? »
  Michal Lukasik · Srinadh Bhojanapalli · Aditya Menon · Sanjiv Kumar
- 2020 Poster: Supervised learning: no loss no cry »
  Richard Nock · Aditya Menon
- 2020 Poster: Federated Learning with Only Positive Labels »
  Felix Xinnan Yu · Ankit Singh Rawat · Aditya Menon · Sanjiv Kumar
- 2019 Poster: Fairness risk measures »
  Robert C Williamson · Aditya Menon
- 2019 Oral: Fairness risk measures »
  Robert C Williamson · Aditya Menon
- 2019 Poster: Lossless or Quantized Boosting with Integer Arithmetic »
  Richard Nock · Robert C Williamson
- 2019 Oral: Lossless or Quantized Boosting with Integer Arithmetic »
  Richard Nock · Robert C Williamson
- 2019 Poster: Boosted Density Estimation Remastered »
  Zac Cranko · Richard Nock
- 2019 Oral: Boosted Density Estimation Remastered »
  Zac Cranko · Richard Nock
- 2018 Poster: Neural Dynamic Programming for Musical Self Similarity »
  Christian Walder · Dongwoo Kim
- 2018 Poster: Inductive Two-Layer Modeling with Parametric Bregman Transfer »
  Vignesh Ganapathiraman · Zhan Shi · Xinhua Zhang · Yaoliang Yu
- 2018 Poster: Self-Bounded Prediction Suffix Tree via Approximate String Matching »
  Dongwoo Kim · Christian Walder
- 2018 Oral: Self-Bounded Prediction Suffix Tree via Approximate String Matching »
  Dongwoo Kim · Christian Walder
- 2018 Oral: Neural Dynamic Programming for Musical Self Similarity »
  Christian Walder · Dongwoo Kim
- 2018 Oral: Inductive Two-Layer Modeling with Parametric Bregman Transfer »
  Vignesh Ganapathiraman · Zhan Shi · Xinhua Zhang · Yaoliang Yu
- 2018 Poster: Variational Network Inference: Strong and Stable with Concrete Support »
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2018 Oral: Variational Network Inference: Strong and Stable with Concrete Support »
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2017 Workshop: Human in the Loop Machine Learning »
  Richard Nock · Cheng Soon Ong
- 2017 Poster: Fast Bayesian Intensity Estimation for the Permanental Process »
  Christian Walder · Adrian N Bishop
- 2017 Talk: Fast Bayesian Intensity Estimation for the Permanental Process »
  Christian Walder · Adrian N Bishop