Poster

How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective

Akhilan Boopathy · Ila R. Fiete

Hall E #222

Keywords: [ Deep Learning ] [ DL: Theory ] [ DL: Algorithms ] [ T: Deep Learning ] [ APP: Neuroscience, Cognitive Science ]

Tue 19 Jul 3:30 p.m. PDT — 5:30 p.m. PDT
 
Spotlight presentation: APP: Neuroscience, Cognitive Science
Tue 19 Jul 1:15 p.m. PDT — 2:45 p.m. PDT

Abstract:

Recent works have examined the theoretical and empirical properties of wide neural networks trained in the Neural Tangent Kernel (NTK) regime. Given that biological neural networks are much wider than their artificial counterparts, we consider NTK-regime wide neural networks as a possible model of biological neural networks. Leveraging NTK theory, we show theoretically that gradient descent drives layerwise weight updates that are aligned with their input activity correlations weighted by error, and we demonstrate empirically that this result also holds in finite-width wide networks. The alignment result allows us to formulate a family of biologically motivated, backpropagation-free learning rules that are theoretically equivalent to backpropagation in infinite-width networks. We test these learning rules on benchmark problems in feedforward and recurrent neural networks and demonstrate, in wide networks, performance comparable to backpropagation. The proposed rules are particularly effective in low-data regimes, which are common in biological learning settings.
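To make the flavor of "backpropagation-free, layer-local updates in a wide network" concrete, here is a minimal NumPy sketch. It is an illustration only, not the paper's proposed learning rules: each layer's weight change takes the error-times-input form described in the abstract, but the output error is delivered to the hidden layer through a fixed random feedback matrix (a feedback-alignment-style stand-in for backpropagation). The network sizes, learning rate, data, and variable names (n_hidden, B, lr, etc.) are all illustrative assumptions.

```python
# Hedged sketch: layer-local, backprop-free updates in a wide network.
# NOT the paper's Align-family rules; the error pathway here is a fixed
# random feedback matrix B (feedback-alignment style), used only to show
# the general "error-weighted input correlation" update structure.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 4096, 1      # "wide" hidden layer (illustrative)
lr, n_steps = 0.5, 500                   # illustrative hyperparameters

# NTK parameterization: O(1) weights, forward pass scaled by 1/sqrt(fan_in)
W1 = rng.standard_normal((n_hidden, n_in))
W2 = rng.standard_normal((n_out, n_hidden))
B = rng.standard_normal((n_hidden, n_out))   # fixed random feedback pathway

# Toy regression data
X = rng.standard_normal((256, n_in))
y = np.sin(X[:, :1])
N = len(X)

for step in range(n_steps):
    # Forward pass
    h = np.maximum(0.0, X @ W1.T / np.sqrt(n_in))    # hidden activity
    pred = h @ W2.T / np.sqrt(n_hidden)
    err = pred - y                                   # output error

    # Output layer: local delta rule (error x presynaptic activity)
    dW2 = err.T @ h / (N * np.sqrt(n_hidden))

    # Hidden layer: error delivered by the fixed matrix B, no backprop;
    # the update is an error-weighted correlation with the layer's input X.
    delta_h = (err @ B.T) * (h > 0)
    dW1 = delta_h.T @ X / (N * np.sqrt(n_in))

    W2 -= lr * dW2
    W1 -= lr * dW1

# Report training error after the backprop-free updates
h = np.maximum(0.0, X @ W1.T / np.sqrt(n_in))
pred = h @ W2.T / np.sqrt(n_hidden)
print("final training MSE:", float(np.mean((pred - y) ** 2)))
```

The point of the sketch is the update structure: every weight change is an outer product of an error signal with the layer's own input activity, computed without propagating gradients through the network. The paper's contribution is a theoretically grounded choice of that error signal which, in the infinite-width (NTK) limit, makes such local updates equivalent to backpropagation; see the paper for the actual rules and their analysis.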
