
Adversarially trained neural representations are already as robust as biological neural representations
Chong Guo · Michael Lee · Guillaume Leclerc · Joel Dapello · Yug Rao · Aleksander Madry · James DiCarlo

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #502

Visual systems of primates are the gold standard of robust perception. There is thus a general belief that mimicking the neural representations that underlie those systems will yield artificial visual systems that are adversarially robust. In this work, we develop a method for performing adversarial visual attacks directly on primate brain activity. We then leverage this method to demonstrate that the above-mentioned belief might not be well-founded. Specifically, we report that the biological neurons that make up the visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to that of existing (robustly trained) artificial neural networks.
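The abstract's notion of an adversarial perturbation can be illustrated with a minimal sketch. The snippet below runs a projected-gradient-style (PGD) attack against a toy linear stand-in for a differentiable response model; the linear model, variable names, and hyperparameters are illustrative assumptions, not the authors' actual method for attacking primate brain activity.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x, w):
    # Toy linear stand-in for a differentiable (surrogate) response model.
    # The real work attacks recorded primate neural activity, not this.
    return w @ x

def pgd_attack(x, w, eps=0.05, step=0.01, iters=40):
    # Find a small L-infinity-bounded perturbation that maximizes the
    # change in the modeled response. For the linear model, the gradient
    # of response(x, w) with respect to x is simply w.
    delta = np.zeros_like(x)
    for _ in range(iters):
        grad = w                               # d(response)/dx
        delta = delta + step * np.sign(grad)   # ascend the response
        delta = np.clip(delta, -eps, eps)      # project back into eps-ball
    return x + delta

x = rng.normal(size=64)   # stand-in "image"
w = rng.normal(size=64)   # stand-in response weights
x_adv = pgd_attack(x, w)

# The perturbation stays within the eps budget, yet shifts the response.
print(np.max(np.abs(x_adv - x)))
print(response(x_adv, w) - response(x, w))
```

The eps budget here plays the role of the perturbation magnitude the abstract compares across biological and artificial systems: smaller budgets that still change the output indicate less robust representations.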

Author Information

Chong Guo (Massachusetts Institute of Technology)
Michael Lee (MIT)
Guillaume Leclerc (MIT)
Joel Dapello (Harvard University)
Yug Rao (Purdue University)
Aleksander Madry (MIT)
James DiCarlo (Massachusetts Institute of Technology)
