Recent works have examined theoretical and empirical properties of wide neural networks trained in the Neural Tangent Kernel (NTK) regime. Given that biological neural networks are much wider than their artificial counterparts, we consider wide neural networks in the NTK regime as a possible model of biological neural networks. Leveraging NTK theory, we show theoretically that gradient descent drives layerwise weight updates that are aligned with their input activity correlations weighted by error, and demonstrate empirically that the result also holds in finite-width wide networks. The alignment result allows us to formulate a family of biologically motivated, backpropagation-free learning rules that are theoretically equivalent to backpropagation in infinite-width networks. We test these learning rules on benchmark problems in feedforward and recurrent neural networks and demonstrate, in wide networks, performance comparable to backpropagation. The proposed rules are particularly effective in low-data regimes, which are common in biological learning settings.
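The paper's Align rules are not reproduced in this abstract, so as a loosely related illustration the sketch below implements a feedback-alignment-style layerwise update (Lillicrap et al., 2016): a different but well-known backpropagation-free rule in which the hidden layer's error signal arrives through a fixed random feedback vector rather than the transpose of the forward weights. All names, shapes, and hyperparameters are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a backpropagation-free layerwise rule, using a
# feedback-alignment-style update as a stand-in for the paper's Align rules
# (which are not given in this abstract). Toy two-layer ReLU regression;
# every quantity below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n, d, width, lr = 512, 20, 256, 5e-3

X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)                   # toy linear target

W1 = rng.standard_normal((width, d)) / np.sqrt(d)
w2 = rng.standard_normal(width) / np.sqrt(width)
b = rng.standard_normal(width) / np.sqrt(width)  # fixed random feedback, never trained

for step in range(301):
    pre = X @ W1.T                               # (n, width) pre-activations
    h = np.maximum(pre, 0.0)                     # ReLU hidden activity
    err = h @ w2 - y                             # (n,) per-sample error

    # Readout: local gradient step, i.e. error-weighted input activity.
    w2 -= lr * h.T @ err / n

    # Hidden layer: route the error through the fixed random vector b
    # instead of backpropagating through w2 (no weight transport).
    delta = (err[:, None] * b[None, :]) * (pre > 0.0)
    W1 -= lr * delta.T @ X / n

    if step % 100 == 0:
        print(f"step {step:3d}  mse {np.mean(err**2):.4f}")  # loss should decrease
```

Feedback alignment is only one example of a backpropagation-free rule; the rules the abstract alludes to additionally exploit the NTK-regime alignment result, which is what makes them theoretically equivalent to backpropagation at infinite width.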
Author Information
Akhilan Boopathy (MIT)
Ila R. Fiete (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Tue, Jul 19 through Wed, Jul 20, Hall E #222
More from the Same Authors
- 2022: No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
  Rylan Schaeffer · Mikail Khona · Ila R. Fiete
- 2023: Optimizing protein fitness using Bi-level Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023: Optimizing protein fitness using Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023 Poster: Model-agnostic Measure of Generalization Difficulty
  Akhilan Boopathy · Kevin Liu · Jaedong Hwang · Shu Ge · Asaad Mohammedsaleh · Ila R. Fiete
- 2022 Poster: Streaming Inference for Infinite Feature Models
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Spotlight: Streaming Inference for Infinite Feature Models
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Poster: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Sugandha Sharma · Sarthak Chandra · Ila R. Fiete
- 2022 Spotlight: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Sugandha Sharma · Sarthak Chandra · Ila R. Fiete
- 2020 Poster: Proper Network Interpretability Helps Adversarial Robustness in Classification
  Akhilan Boopathy · Sijia Liu · Gaoyuan Zhang · Cynthia Liu · Pin-Yu Chen · Shiyu Chang · Luca Daniel
- 2019 Poster: PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
  Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel
- 2019 Oral: PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
  Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel