Oral
Adversarially trained neural representations are already as robust as biological neural representations
Chong Guo · Michael Lee · Guillaume Leclerc · Joel Dapello · Yug Rao · Aleksander Madry · James DiCarlo

Wed Jul 20 02:05 PM -- 02:25 PM (PDT) @ Room 301 - 303

Visual systems of primates are the gold standard of robust perception. There is thus a general belief that mimicking the neural representations that underlie those systems will yield artificial visual systems that are adversarially robust. In this work, we develop a method for performing adversarial visual attacks directly on primate brain activity. We then leverage this method to demonstrate that the above-mentioned belief might not be well-founded. Specifically, we report that the biological neurons that make up the visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to that of existing (robustly trained) artificial neural networks.
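To make the notion of an adversarial attack on a representation concrete, the sketch below shows a standard projected-gradient-descent (PGD) attack that perturbs an image to maximally displace a differentiable model's internal representation under an L2 budget. This is an illustrative stand-in, not the authors' actual pipeline (which targets recorded primate brain activity); the function name, the L2 threat model, and the toy network are all assumptions made here for demonstration.

```python
# Minimal sketch (NOT the paper's method): PGD that perturbs an image to
# maximally displace a model's representation, under an L2 budget eps.
# The "model" is a placeholder for any differentiable pixels-to-responses map.
import torch
import torch.nn as nn

def representation_attack(model, x, eps=1.0, steps=32, step_size=0.1):
    """Find delta with ||delta||_2 <= eps maximizing the L2 distance
    between model(x) and model(x + delta)."""
    with torch.no_grad():
        target = model(x)  # clean representation, held fixed
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Push the perturbed representation away from the clean one
        loss = (model(x + delta) - target).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            delta += step_size * g / (g.norm() + 1e-12)  # normalized ascent step
            n = delta.norm()
            if n > eps:                                  # project onto L2 ball
                delta *= eps / n
        delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a random convolutional "representation"
if __name__ == "__main__":
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten())
    x = torch.rand(1, 3, 32, 32)
    x_adv = representation_attack(model, x)
    print((x_adv - x).norm())  # perturbation magnitude, at most eps
```

Comparing how much a fixed perturbation budget can move a representation is one way to quantify the "susceptibility" that the abstract compares between biological and robustly trained artificial networks.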

Author Information

Chong Guo (Massachusetts Institute of Technology)
Michael Lee (MIT)
Guillaume Leclerc (MIT)
Joel Dapello (Harvard University)
Yug Rao (Purdue University)
Aleksander Madry (MIT)
James DiCarlo (Massachusetts Institute of Technology)
