Poster
Parseval Networks: Improving Robustness to Adversarial Examples
Moustapha Cisse · Piotr Bojanowski · Edouard Grave · Yann Dauphin · Nicolas Usunier
We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than $1$. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most important feature of Parseval networks is to constrain the weight matrices of linear and convolutional layers to be (approximately) Parseval tight frames, which are extensions of orthogonal matrices to non-square matrices. We describe how these constraints can be maintained efficiently during SGD. We show that Parseval networks match the state of the art in accuracy on CIFAR-10/100 and Street View House Numbers (SVHN), while being more robust than their vanilla counterparts against adversarial examples. Incidentally, Parseval networks also tend to train faster and make better use of the full capacity of the network.
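The Parseval constraint amounts to keeping each weight matrix close to a tight frame, i.e. $W W^\top \approx I$, by interleaving a cheap retraction step with the SGD updates. A minimal NumPy sketch of one such approximate orthonormalization step is below; the update rule nudges every singular value of $W$ toward $1$, but the function name, the step size `beta`, and the number of steps are illustrative choices, not the paper's exact hyperparameters.

```python
import numpy as np

def parseval_retraction(W, beta=0.5, n_steps=1):
    """Pull the rows of W toward an (approximate) Parseval tight frame,
    so that W @ W.T ~= I. Each step applies
        W <- (1 + beta) W - beta * W @ W.T @ W,
    which moves every singular value of W toward 1.
    beta and n_steps are illustrative, not prescribed values."""
    for _ in range(n_steps):
        W = (1 + beta) * W - beta * (W @ W.T @ W)
    return W

# Illustration: a small random matrix becomes row-orthonormal
# after repeated retraction steps.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 8))
W = parseval_retraction(W, beta=0.5, n_steps=100)
print(np.allclose(W @ W.T, np.eye(3), atol=1e-6))
```

In practice one would apply a single retraction step with a small `beta` after each gradient update, rather than iterating to convergence as in the illustration above.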
Author Information
Moustapha Cisse (Facebook AI Research)
Piotr Bojanowski (Facebook)
Edouard Grave (Facebook AI Research)
Yann Dauphin (Facebook AI Research)
Nicolas Usunier (Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Parseval Networks: Improving Robustness to Adversarial Examples »
  Tue, Aug 8th, 04:06 – 04:24 AM, Darling Harbour Theatre
More from the Same Authors
- 2022 Workshop: Responsible Decision Making in Dynamic Environments »
  Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier
- 2022 Poster: Flashlight: Enabling Innovation in Tools for Machine Learning »
  Jacob Kahn · Vineel Pratap · Tatiana Likhomanenko · Qiantong Xu · Awni Hannun · Jeff Cai · Paden Tomasello · Ann Lee · Edouard Grave · Gilad Avidov · Benoit Steiner · Vitaliy Liptchinsky · Gabriel Synnaeve · Ronan Collobert
- 2022 Spotlight: Flashlight: Enabling Innovation in Tools for Machine Learning »
  Jacob Kahn · Vineel Pratap · Tatiana Likhomanenko · Qiantong Xu · Awni Hannun · Jeff Cai · Paden Tomasello · Ann Lee · Edouard Grave · Gilad Avidov · Benoit Steiner · Vitaliy Liptchinsky · Gabriel Synnaeve · Ronan Collobert
- 2020 Poster: Growing Action Spaces »
  Gregory Farquhar · Laura Gustafson · Zeming Lin · Shimon Whiteson · Nicolas Usunier · Gabriel Synnaeve
- 2020 Poster: Fully Parallel Hyperparameter Search: Reshaped Space-Filling »
  Marie-Liesse Cauwet · Camille Couprie · Julien Dehos · Pauline Luc · Jeremy Rapin · Morgane Riviere · Fabien Teytaud · Olivier Teytaud · Nicolas Usunier
- 2018 Poster: Optimizing the Latent Space of Generative Networks »
  Piotr Bojanowski · Armand Joulin · David Lopez-Paz · Arthur Szlam
- 2018 Oral: Optimizing the Latent Space of Generative Networks »
  Piotr Bojanowski · Armand Joulin · David Lopez-Paz · Arthur Szlam
- 2018 Poster: Canonical Tensor Decomposition for Knowledge Base Completion »
  Timothee Lacroix · Nicolas Usunier · Guillaume Obozinski
- 2018 Oral: Canonical Tensor Decomposition for Knowledge Base Completion »
  Timothee Lacroix · Nicolas Usunier · Guillaume Obozinski
- 2017 Workshop: Video Games and Machine Learning »
  Gabriel Synnaeve · Julian Togelius · Tom Schaul · Oriol Vinyals · Nicolas Usunier
- 2017 Poster: Convolutional Sequence to Sequence Learning »
  Jonas Gehring · Michael Auli · David Grangier · Denis Yarats · Yann Dauphin
- 2017 Poster: Language Modeling with Gated Convolutional Networks »
  Yann Dauphin · Angela Fan · Michael Auli · David Grangier
- 2017 Talk: Convolutional Sequence to Sequence Learning »
  Jonas Gehring · Michael Auli · David Grangier · Denis Yarats · Yann Dauphin
- 2017 Talk: Language Modeling with Gated Convolutional Networks »
  Yann Dauphin · Angela Fan · Michael Auli · David Grangier
- 2017 Poster: Unsupervised Learning by Predicting Noise »
  Piotr Bojanowski · Armand Joulin
- 2017 Talk: Unsupervised Learning by Predicting Noise »
  Piotr Bojanowski · Armand Joulin