Poster in Workshop: New Frontiers in Adversarial Machine Learning
Layerwise Hebbian/anti-Hebbian (HaH) Learning in Deep Networks: A Neuro-inspired Approach to Robustness
Metehan Cekic · Can Bakiskan · Upamanyu Madhow
Abstract:
We propose a neuro-inspired approach for engineering robustness into deep neural networks (DNNs), in which end-to-end cost functions are supplemented with layer-wise costs promoting Hebbian (“fire together,” “wire together”) updates for highly active neurons, and anti-Hebbian updates for the remaining neurons. Unlike standard end-to-end training, which does not directly exert control over the features extracted at intermediate layers, Hebbian/anti-Hebbian (HaH) learning is aimed at producing sparse, strong activations which are more difficult to corrupt. We further encourage sparsity by introducing competition between neurons via divisive normalization and thresholding, together with implicit $\ell_2$ normalization of neuronal weights, instead of batch norm. Preliminary CIFAR-10 experiments demonstrate that our neuro-inspired model, trained without augmentation by noise or adversarial perturbations, is substantially more robust to a range of corruptions than a baseline end-to-end trained model. This opens up exciting research frontiers for training robust DNNs, with layer-wise costs providing a strategy complementary to that of data-augmented end-to-end training.
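The abstract describes three mechanisms: a layer-wise Hebbian/anti-Hebbian cost added to the end-to-end loss, divisive normalization with thresholding to create competition between neurons, and implicit $\ell_2$ normalization of neuronal weights in place of batch norm. The sketch below is a minimal PyTorch illustration of how such a layer and layer-wise cost could be wired together; it is not the authors' implementation, and names such as `HaHConvLayer`, `hah_layer_cost`, `k_winners`, `threshold`, and `hah_weight` are assumptions introduced for this example.

```python
# Hedged sketch of a HaH-style layer: L2-normalized filters, divisive
# normalization with thresholding, and a layer-wise cost that rewards the
# most active channels (Hebbian) and penalizes the rest (anti-Hebbian).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HaHConvLayer(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, k_winners=4, threshold=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.k_winners = k_winners    # channels treated as "highly active" (assumed)
        self.threshold = threshold    # divisive-normalization threshold (assumed)

    def forward(self, x):
        # Implicit L2 normalization of each filter, used instead of batch norm.
        w = self.conv.weight
        w = w / (w.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
        z = F.conv2d(x, w, padding=self.conv.padding[0])

        # Divisive normalization: divide each channel by the mean activity
        # across channels at the same location, then threshold to sparsify.
        denom = z.abs().mean(dim=1, keepdim=True) + 1e-8
        return F.relu(z / denom - self.threshold)

    def hah_layer_cost(self, a):
        # Layer-wise HaH cost: minimizing it strengthens the k most active
        # channels per location and suppresses the remaining ones,
        # encouraging sparse, strong activations.
        topk_vals, _ = a.topk(self.k_winners, dim=1)
        hebbian = topk_vals.sum(dim=1)           # activity of the winners
        anti_hebbian = a.sum(dim=1) - hebbian    # activity of everyone else
        return (anti_hebbian - hebbian).mean()


# Usage sketch: supplement the end-to-end loss with the layer-wise HaH cost.
if __name__ == "__main__":
    layer = HaHConvLayer(3, 16)
    head = nn.Linear(16 * 32 * 32, 10)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    a = layer(x)
    logits = head(a.flatten(1))
    hah_weight = 0.1  # assumed trade-off coefficient between the two costs
    loss = F.cross_entropy(logits, y) + hah_weight * layer.hah_layer_cost(a)
    loss.backward()
```

In this sketch the HaH term is simply added to the classification loss so that every layer receives a direct training signal for sparse, strong activations, which mirrors the abstract's idea of layer-wise costs complementing end-to-end training; the specific cost function and hyperparameters are placeholders.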