Robustness to adversarial attacks is typically obtained through expensive adversarial training with Projected Gradient Descent. We introduce ROPUST, a remarkably simple and efficient method to leverage robust pre-trained models and further increase their robustness, at no cost in natural accuracy. Our technique relies on an Optical Processing Unit (OPU), a photonic co-processor, and a fine-tuning step performed with Direct Feedback Alignment, a synthetic gradient training scheme. We test our method on nine different models against four attacks in RobustBench, consistently improving over state-of-the-art performance. We also introduce phase retrieval attacks, specifically designed to target our own defense, and show that ROPUST remains effective even against state-of-the-art phase retrieval techniques.
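The fine-tuning scheme named in the abstract, Direct Feedback Alignment (DFA), trains each hidden layer using the global output error projected through a fixed random feedback matrix, rather than gradients backpropagated through the transposed forward weights. As a rough illustration only, here is a minimal NumPy sketch of DFA on a toy two-hidden-layer perceptron; the data, layer sizes, MSE loss, tanh activations, and learning rate are all illustrative assumptions, not the paper's actual setup.

    # Minimal sketch of Direct Feedback Alignment (DFA) on a toy MLP.
    # Hypothetical setup: random regression data, MSE loss, tanh activations.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data (illustrative, not from the paper).
    X = rng.standard_normal((256, 8))
    Y = np.sin(X @ rng.standard_normal((8, 2)))

    # Forward weights are trained; feedback matrices B1, B2 stay fixed and random.
    W1 = 0.1 * rng.standard_normal((8, 32))
    W2 = 0.1 * rng.standard_normal((32, 32))
    W3 = 0.1 * rng.standard_normal((32, 2))
    B1 = rng.standard_normal((2, 32))  # projects the global error to layer 1
    B2 = rng.standard_normal((2, 32))  # projects the global error to layer 2

    lr = 1e-2
    for step in range(500):
        # Forward pass.
        h1 = np.tanh(X @ W1)
        h2 = np.tanh(h1 @ W2)
        out = h2 @ W3
        e = out - Y  # global error signal (gradient of MSE w.r.t. the output)

        # DFA: each hidden layer receives the error through its own fixed
        # random matrix instead of the transposed forward weights.
        d2 = (e @ B2) * (1 - h2 ** 2)
        d1 = (e @ B1) * (1 - h1 ** 2)

        # Weight updates from local activities and projected errors.
        W3 -= lr * h2.T @ e / len(X)
        W2 -= lr * h1.T @ d2 / len(X)
        W1 -= lr * X.T @ d1 / len(X)

    print("final MSE:", np.mean(e ** 2))

One appeal of DFA in this setting is that the error reaches each layer without differentiating through the layers above it, which sidesteps the need to backpropagate through the defense's fixed analog random projection.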
Author Information
Alessandro Cappelli (LightOn)
Ruben Ohana (École Normale Supérieure & LightOn)
Julien Launay (École Normale Supérieure)
Laurent Meunier (Paris Dauphine University / Facebook AI Research)
Iacopo Poli (LightOn)
More from the Same Authors
- 2023 Workshop: Workshop on Efficient Systems for Foundation Models
  Julien Launay · Daniel Y Fu · Tri Dao · Daniel Hesslow · Beidi Chen · Azalia Mirhoseini · Percy Liang
- 2022 Poster: What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization?
  Thomas Wang · Adam Roberts · Daniel Hesslow · Teven Le Scao · Hyung Won Chung · Iz Beltagy · Julien Launay · Colin Raffel
- 2022 Spotlight: What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization?
  Thomas Wang · Adam Roberts · Daniel Hesslow · Teven Le Scao · Hyung Won Chung · Iz Beltagy · Julien Launay · Colin Raffel
- 2021 Poster: Align, then memorise: the dynamics of learning with feedback alignment
  Maria Refinetti · Stéphane d'Ascoli · Ruben Ohana · Sebastian Goldt
- 2021 Spotlight: Align, then memorise: the dynamics of learning with feedback alignment
  Maria Refinetti · Stéphane d'Ascoli · Ruben Ohana · Sebastian Goldt