

Poster

Emergence of Sparse Representations from Noise

Trenton Bricken · Rylan Schaeffer · Bruno Olshausen · Gabriel Kreiman

Exhibit Hall 1 #434

Abstract:

A hallmark of biological neural networks, which distinguishes them from their artificial counterparts, is the high degree of sparsity in their activations. This discrepancy raises three questions that our work helps to answer: (i) Why are biological networks so sparse? (ii) What are the benefits of this sparsity? (iii) How can these benefits be utilized by deep learning models? Our answers to all three questions center on training networks to handle random noise. Surprisingly, we discover that noisy training introduces three implicit loss terms that cause sparsely firing neurons to specialize to high-variance features of the dataset. When trained to reconstruct noisy CIFAR-10 images, neurons learn biological receptive fields. More broadly, noisy training presents a new approach to potentially increase model interpretability, with additional benefits to robustness and computational efficiency.
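To make the setup concrete, below is a minimal sketch of noisy training as the abstract describes it: an autoencoder trained to reconstruct clean CIFAR-10 images from noise-corrupted inputs. The single-hidden-layer architecture, Gaussian noise level, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=256, shuffle=True)

dim = 3 * 32 * 32  # flattened CIFAR-10 image
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(dim, 1024),
    nn.ReLU(),              # nonlinearity under which sparse firing can emerge
    nn.Linear(1024, dim),
).to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noise_std = 0.3  # assumed corruption level

for epoch in range(5):
    for x, _ in loader:
        x = x.to(device)
        noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        recon = model(noisy).view_as(x)
        loss = nn.functional.mse_loss(recon, x)      # target is the clean image
        opt.zero_grad()
        loss.backward()
        opt.step()

After training, each hidden unit's incoming weights (a row of the first Linear layer, reshaped to 3x32x32) can be visualized to inspect the learned receptive fields, and hidden-layer activation statistics can be checked for the sparsity the abstract predicts.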
