Poster
Monge blunts Bayes: Hardness Results for Adversarial Training
Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder

Wed Jun 12 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #191

The last few years have seen a staggering number of empirical studies of the robustness of neural networks under adversarial perturbations of their inputs. Most rely on an adversary that carries out local modifications within prescribed balls. None, however, has so far questioned the broader picture: how to frame a resource-bounded adversary so that it can be severely detrimental to learning, a non-trivial problem which entails, at a minimum, the choice of loss and classifiers.

We suggest a formal answer for losses that satisfy the minimal statistical requirement of being proper. We pin down a simple sufficient property for any given class of adversaries to be detrimental to learning, involving a central measure of "harmfulness" which generalizes the well-known class of integral probability metrics. A key feature of our result is that it holds for all proper losses, and for a popular subset of these, the optimisation of this central measure turns out to be independent of the loss. When classifiers are Lipschitz (a now popular constraint in adversarial training), this optimisation reduces to optimal transport, performing a low-budget compression of the class marginals. Toy experiments confirm a finding recently observed independently: training against a sufficiently budgeted adversary of this kind improves generalization.
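The connection between Lipschitz classifiers and optimal transport invoked above can be illustrated with the Wasserstein-1 distance, which is exactly the integral probability metric generated by the class of 1-Lipschitz functions. The following is a minimal sketch of that quantity on synthetic class marginals, not the paper's algorithm; the distribution parameters are illustrative choices only.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two synthetic 1-D class marginals (illustrative parameters).
pos = rng.normal(loc=1.0, scale=1.0, size=2000)   # samples from the +1 class
neg = rng.normal(loc=-1.0, scale=1.0, size=2000)  # samples from the -1 class

# Wasserstein-1 is the IPM  sup_{f 1-Lipschitz} E_pos[f] - E_neg[f];
# in 1-D it is computed exactly from the sorted empirical samples.
w1 = wasserstein_distance(pos, neg)
print(f"empirical W1 between class marginals: {w1:.3f}")
```

For these two Gaussians the true W1 is the mean gap (here 2.0), so the empirical value lands close to that; shrinking the gap shrinks the IPM, i.e. makes the marginals harder for any 1-Lipschitz classifier to separate.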

Author Information

Zac Cranko (ANU)
Aditya Menon (Google Research)
Richard Nock (Data61, The Australian National University and the University of Sydney)
Cheng Soon Ong (Data61 and ANU)

I am a senior principal research scientist at the Machine Learning Research Group, Data61, CSIRO, and the director of the machine learning and artificial intelligence future science platform at CSIRO. I am also an adjunct associate professor at the Australian National University. I am interested in enabling scientific discovery by extending statistical machine learning methods. In recent years, we have developed new optimisation methods for solving problems such as ranking, feature selection and experimental design, with the aim of solving scientific questions in collaboration with experts in other fields. This has included diverse problems in genomics, systems biology, and astronomy. I advocate strongly for open science, as well as diversity and inclusion.

Zhan Shi (University of Illinois at Chicago)
Christian Walder (Data61, the Australian National University)
