Content-addressable memory (CAM) networks, so-called because stored items can be recalled by partial or corrupted versions of the items, exhibit near-perfect recall of a small number of information-dense patterns below capacity and a 'memory cliff' beyond, such that inserting a single additional pattern results in catastrophic loss of all stored patterns. We propose a novel CAM architecture, Memory Scaffold with Heteroassociation (MESH), that factorizes the problems of internal attractor dynamics and association with external content to generate a CAM continuum without a memory cliff: Small numbers of patterns are stored with complete information recovery matching standard CAMs, while inserting more patterns still results in partial recall of every pattern, with a graceful trade-off between pattern number and pattern richness. Motivated by the architecture of the Entorhinal-Hippocampal memory circuit in the brain, MESH is a tripartite architecture with pairwise interactions that uses a predetermined set of internally stabilized states together with heteroassociation between the internal states and arbitrary external patterns. We show analytically and experimentally that for any number of stored patterns, MESH nearly saturates the total information bound (given by the number of synapses) for CAM networks, outperforming all existing CAM models.
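The core idea of factorizing attractor dynamics from content association can be illustrated with a toy NumPy sketch. Everything here is an illustrative assumption rather than the paper's exact construction: the sizes, the pseudoinverse heteroassociation rule, and a nearest-neighbor clean-up that stands in for the scaffold's internal attractor dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: pattern dimension, scaffold dimension, number of stored patterns.
N_pat, N_scaf, K = 64, 32, 10

# Predetermined scaffold states: fixed random +/-1 codes, one per stored pattern.
# In MESH these are internally stabilized attractor states; here they are just fixed vectors.
scaffold = rng.choice([-1.0, 1.0], size=(K, N_scaf))

# Arbitrary external patterns to store.
patterns = rng.choice([-1.0, 1.0], size=(K, N_pat))

# Heteroassociative weights (pseudoinverse rule, an assumption for this sketch):
W_in = np.linalg.pinv(patterns) @ scaffold    # pattern -> scaffold
W_out = np.linalg.pinv(scaffold) @ patterns   # scaffold -> pattern

def recall(cue):
    """Map a (possibly corrupted) cue into scaffold space, snap to the
    nearest predetermined scaffold state, then map back to pattern space."""
    s = np.sign(cue @ W_in)
    # Clean-up by maximum overlap: a stand-in for the scaffold's attractor dynamics.
    s_clean = scaffold[np.argmax(scaffold @ s)]
    return np.sign(s_clean @ W_out)

# Corrupt a stored pattern by flipping 8 of its 64 bits, then recall it.
cue = patterns[3].copy()
flip = rng.choice(N_pat, size=8, replace=False)
cue[flip] *= -1
print(np.array_equal(recall(cue), patterns[3]))  # True with high probability
```

Because recall always terminates in one of the predetermined scaffold states, adding more patterns degrades the heteroassociative readout gradually rather than destabilizing the attractors themselves, which is the intuition behind the absence of a memory cliff.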
Author Information
Sugandha Sharma (MIT)
Sarthak Chandra (MIT)
Ila R. Fiete (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Tue, Jul 19 through Wed, Jul 20 · Hall E #235
More from the Same Authors
- 2022: P08: Map Induction: Compositional Spatial Submap Learning for Efficient Exploration in Novel Environments
  Sugandha Sharma
- 2022: No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
  Rylan Schaeffer · Mikail Khona · Ila R. Fiete
- 2023: Optimizing protein fitness using Bi-level Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023: Optimizing protein fitness using Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023 Poster: Model-agnostic Measure of Generalization Difficulty
  Akhilan Boopathy · Kevin Liu · Jaedong Hwang · Shu Ge · Asaad Mohammedsaleh · Ila R. Fiete
- 2022: Contributed Spotlight Talks: Part 1
  David Dohan · Winnie Xu · Sugandha Sharma · Tan Zhi-Xuan
- 2022 Poster: Streaming Inference for Infinite Feature Models
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Spotlight: Streaming Inference for Infinite Feature Models
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Poster: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Akhilan Boopathy · Ila R. Fiete
- 2022 Spotlight: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Akhilan Boopathy · Ila R. Fiete