Fundamental research in Neuroscience is currently undergoing a renaissance based on deep learning. The central promises of deep learning-based modeling of brain circuits are that such models shed light on evolutionary optimization problems, constraints, and solutions, and that they generate novel predictions regarding neural phenomena. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one often gets neither. We begin by reviewing the principles of grid cell mechanisms and function obtained from analytical and first-principles modeling efforts, then consider the claims of deep learning models of grid cells and rigorously examine their results under varied conditions. Using large-scale hyperparameter sweeps and hypothesis-driven experimentation, we demonstrate that the results of such models may reveal more about particular, non-fundamental implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. Finally, we discuss why these models of the brain cannot be expected to work without the addition of substantial amounts of inductive bias, an informal No Free Lunch theorem for Neuroscience. In conclusion, caution and consideration, together with biological knowledge, are warranted in building and interpreting deep learning models in Neuroscience.
Author Information
Rylan Schaeffer (Stanford University)
Mikail Khona (Massachusetts Institute of Technology)
Physics PhD student doing neuroscience and machine learning
Ila R. Fiete (MIT)
More from the Same Authors
- 2023: FACADE: A Framework for Adversarial Circuit Anomaly Detection and Evaluation
  Dhruv Pai · Andres Carranza · Rylan Schaeffer · Arnuv Tandon · Sanmi Koyejo
- 2023: Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting
  Rylan Schaeffer · Kateryna Pistunova · Samar Khanna · Sarthak Consul · Sanmi Koyejo
- 2023: Optimizing protein fitness using Bi-level Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023: Are Emergent Abilities of Large Language Models a Mirage?
  Rylan Schaeffer · Brando Miranda · Sanmi Koyejo
- 2023: Optimizing protein fitness using Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023: Deceptive Alignment Monitoring
  Andres Carranza · Dhruv Pai · Rylan Schaeffer · Arnuv Tandon · Sanmi Koyejo
- 2023 Poster: Model-agnostic Measure of Generalization Difficulty
  Akhilan Boopathy · Kevin Liu · Jaedong Hwang · Shu Ge · Asaad Mohammedsaleh · Ila R. Fiete
- 2023 Poster: Emergence of Sparse Representations from Noise
  Trenton Bricken · Rylan Schaeffer · Bruno Olshausen · Gabriel Kreiman
- 2022 Poster: Streaming Inference for Infinite Feature Models
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Spotlight: Streaming Inference for Infinite Feature Models
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Poster: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Akhilan Boopathy · Ila R. Fiete
- 2022 Poster: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Sugandha Sharma · Sarthak Chandra · Ila R. Fiete
- 2022 Spotlight: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Sugandha Sharma · Sarthak Chandra · Ila R. Fiete
- 2022 Spotlight: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Akhilan Boopathy · Ila R. Fiete