

Talk in Workshop: Principled Approaches to Deep Learning

Contributed Presentation 5 - Towards Deep Learning Models Resistant to Adversarial Attacks

2017 Talk

Abstract:

Towards Deep Learning Models Resistant to Adversarial Attacks

Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu

Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify general methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. This suggests that adversarially resistant deep learning models might be within our reach after all.
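
The robust optimization view frames training as a saddle-point problem: minimize, over the model parameters, the expected loss maximized over all allowed perturbations of each input. The abstract does not spell out a particular algorithm, so the following is only a minimal sketch of one common instantiation of this idea, l_inf-bounded projected gradient descent (PGD) adversarial training in PyTorch; the names `model` and `optimizer` and the hyperparameters (eps, alpha, steps) are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of l_inf adversarial training via an inner PGD attack.
# Assumes a PyTorch classifier `model`, an `optimizer` over its parameters,
# and inputs `x` scaled to [0, 1]; all hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a perturbation inside the l_inf ball of
    radius eps that (approximately) maximizes the classification loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start in the ball
    delta = delta.detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend along the gradient sign, then project back onto the ball
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        # Keep the perturbed input inside the valid pixel range
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update parameters on adversarially perturbed inputs."""
    model.eval()                      # craft the attack with fixed BN statistics
    delta = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `pgd_attack` approximates the inner maximization and the surrounding training step performs the outer minimization, which is the structure of the saddle-point objective the abstract alludes to.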
