

Poster

LCANets: Lateral Competition Improves Robustness Against Corruption and Attack

Michael Teti · Garrett T Kenyon · Benjamin Migliori · Juston Moore

Hall E #232

Keywords: [ DL: Robustness ] [ DL: Generative Models and Autoencoders ] [ APP: Computer Vision ] [ MISC: Representation Learning ] [ MISC: Unsupervised and Semi-supervised Learning ] [ APP: Neuroscience, Cognitive Science ]


Abstract:

Although Convolutional Neural Networks (CNNs) achieve high accuracy on image recognition tasks, they lack robustness against realistic corruptions and fail catastrophically when deliberately attacked. Previous CNNs with representations similar to those of primary visual cortex (V1) were more robust to adversarial attacks on images than current adversarial defense techniques, but they required either training on large-scale neural recordings or handcrafting neuroscientific models. Motivated by evidence that neural activity in V1 is sparse, we develop a class of hybrid CNNs, called LCANets, whose frontend performs sparse coding via local lateral competition. We demonstrate that LCANets achieve clean accuracy competitive with standard CNNs on action and image recognition tasks and significantly greater accuracy under a variety of image corruptions. We also perform the first adversarial attacks mounted with full knowledge of a sparse coding CNN layer, attacking LCANets with both white-box and black-box attacks, and we show that, contrary to previous hypotheses, sparse coding layers are not very robust to white-box attacks. Finally, we propose using sparse coding layers as a plug-and-play robust frontend, showing that they significantly increase the robustness of adversarially trained CNNs against both corruptions and attacks.
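
The frontend described in the abstract, sparse coding via local lateral competition, corresponds to the kind of sparse coding performed by the Locally Competitive Algorithm (LCA), which the name LCANets suggests. As a rough illustration of the idea only, the sketch below implements a convolutional LCA layer in PyTorch; the non-negative soft threshold, step size, iteration count, and dictionary shapes are illustrative assumptions, not the authors' actual implementation or hyperparameters.

```python
import torch
import torch.nn.functional as F


def soft_threshold(u, lam):
    """Non-negative soft threshold: a unit fires only when its potential exceeds lam."""
    return torch.relu(u - lam)


def lca_frontend(x, dictionary, lam=0.5, eta=0.1, n_iters=100):
    """Compute a convolutional sparse code of x via LCA dynamics (illustrative sketch).

    x:          input images, shape (B, C, H, W)
    dictionary: convolutional features Phi, shape (K, C, kH, kW), assumed unit-norm
    Returns sparse activation maps of shape (B, K, H - kH + 1, W - kW + 1).
    """
    u = torch.zeros_like(F.conv2d(x, dictionary))  # membrane potentials

    for _ in range(n_iters):
        a = soft_threshold(u, lam)                    # current sparse code
        recon = F.conv_transpose2d(a, dictionary)     # reconstruction from the code
        # Lateral competition enters through the reconstruction error:
        # du/dt ∝ Phi^T (x - recon) + a - u, which is equivalent to the usual
        # LCA update b - u - (Phi^T Phi - I) a without forming Phi^T Phi explicitly.
        drive = F.conv2d(x - recon, dictionary)
        u = u + eta * (drive + a - u)

    return soft_threshold(u, lam)


# Illustrative usage: the sparse maps would then be fed to a standard CNN backbone.
x = torch.rand(8, 3, 32, 32)                              # CIFAR-sized batch
phi = torch.randn(64, 3, 9, 9)
phi = phi / phi.flatten(1).norm(dim=1).view(-1, 1, 1, 1)  # unit-norm features
codes = lca_frontend(x, phi)                              # shape (8, 64, 24, 24)
```

Because the competition is implemented through the reconstruction residual, only features that explain parts of the input not already accounted for by stronger competitors remain active, yielding the sparse, V1-like representation the abstract motivates.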
