

Contributed talk in Workshop: Uncertainty and Robustness in Deep Learning

How Can We Be So Dense? The Robustness of Highly Sparse Representations

Subutai Ahmad


Abstract:

Neural networks can be highly sensitive to noise and perturbations. In this paper we suggest that high-dimensional sparse representations can lead to increased robustness to noise and interference. A key intuition we develop is that the ratio of the match volume around a sparse vector to the total representational space decreases exponentially with dimensionality, leading to highly robust matching with low interference from other patterns. We analyze efficient sparse networks containing both sparse weights and sparse activations. Simulations on MNIST, the Google Speech Command Dataset, and CIFAR-10 show that such networks demonstrate improved robustness to random noise compared to dense networks, while maintaining competitive accuracy. We propose that sparsity should be a core design constraint for creating highly robust networks.
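The key intuition above can be illustrated with a small numerical sketch. The snippet below is not the paper's derivation; it assumes binary vectors of length n with w active bits, counts a random w-sparse vector as a "match" if it overlaps a fixed target in at least theta active bits, and computes the fraction of the representational space that matches. The function name and the specific sparsity/threshold choices are illustrative assumptions. With sparsity and threshold held in proportion, the fraction falls off rapidly as n grows.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact formula):
# binary vectors of length n with w active bits; a vector "matches" a fixed
# w-sparse target if they share at least theta active bits.

from math import comb

def match_fraction(n: int, w: int, theta: int) -> float:
    """Fraction of all w-sparse binary vectors of length n that share at
    least `theta` active bits with a fixed w-sparse target vector."""
    total = comb(n, w)                      # size of the representational space
    matches = sum(
        comb(w, b) * comb(n - w, w - b)     # b overlapping bits, w-b elsewhere
        for b in range(theta, w + 1)
    )
    return matches / total

if __name__ == "__main__":
    # Keep sparsity (~6% active) and threshold (half the active bits) fixed
    # in proportion while growing the dimensionality.
    for n in (64, 128, 256, 512, 1024, 2048):
        w = n // 16
        theta = w // 2
        print(f"n={n:5d}  w={w:4d}  theta={theta:3d}  "
              f"match fraction ~ {match_fraction(n, w, theta):.3e}")
```

Under these assumed settings, the printed match fraction shrinks by many orders of magnitude as n doubles, which is the sense in which high-dimensional sparse codes leave little room for accidental interference from other patterns.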
