While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task-relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.
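The per-synapse bookkeeping described in the abstract can be sketched as a quadratic regularizer whose strength each parameter earns along its training trajectory: while a task is being learned, each weight accumulates a running estimate of how much it contributed to reducing the loss, and at a task boundary that estimate is folded into an importance that anchors the weight near its current value. The following is a minimal PyTorch sketch of that idea, not the authors' reference implementation; the class name and the hyperparameter values for the strength `c` and damping term `xi` are illustrative.

```python
import torch

class SynapticIntelligence:
    """Sketch of path-integral synaptic importance: accumulate each
    parameter's contribution to the loss decrease during a task, then
    penalize later drift away from the post-task anchor."""

    def __init__(self, params, c=0.1, xi=1e-3):
        self.params = list(params)
        self.c, self.xi = c, xi                                       # illustrative values
        self.w = [torch.zeros_like(p) for p in self.params]           # running path integrals
        self.omega = [torch.zeros_like(p) for p in self.params]       # consolidated importances
        self.theta_star = [p.detach().clone() for p in self.params]   # anchors from past tasks

    def accumulate(self, old_params):
        """Call after each optimizer step; old_params are copies taken before the step."""
        for w, p, old in zip(self.w, self.params, old_params):
            if p.grad is not None:
                # -g * delta_theta: this step's contribution to the drop in the loss
                w.add_(-p.grad * (p.detach() - old))

    def penalty(self):
        """Quadratic surrogate loss pulling important weights toward their anchors."""
        return self.c * sum(
            (om * (p - th) ** 2).sum()
            for om, p, th in zip(self.omega, self.params, self.theta_star)
        )

    def consolidate(self):
        """Call at a task boundary: fold path integrals into importances, reset anchors."""
        for w, om, p, th in zip(self.w, self.omega, self.params, self.theta_star):
            om.add_(w / ((p.detach() - th) ** 2 + self.xi))            # normalize by total drift
            w.zero_()
        self.theta_star = [p.detach().clone() for p in self.params]
```

In a training loop one would snapshot the parameters before each optimizer step, call `accumulate(snapshot)` after the step, add `penalty()` to the task loss when training on subsequent tasks, and call `consolidate()` whenever a task ends.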
Author Information
Friedemann Zenke (Stanford University)
Ben Poole (Stanford University)
Surya Ganguli (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Continual Learning Through Synaptic Intelligence
  Tue. Aug 8th, 08:30 AM -- 12:00 PM, Gallery #46
More from the Same Authors
- 2022: Pre-Training on a Data Diet: Identifying Sufficient Examples for Early Training
  Mansheej Paul · Brett Larsen · Surya Ganguli · Jonathan Frankle · Gintare Karolina Dziugaite
- 2023: A strong implicit bias in SGD dynamics towards much simpler subnetworks through stochastic collapse to invariant sets
  Surya Ganguli
- 2021 Poster: Understanding self-supervised learning dynamics without contrastive pairs
  Yuandong Tian · Xinlei Chen · Surya Ganguli
- 2021 Poster: A theory of high dimensional regression with arbitrary correlations between input features and target functions: sample complexity, multiple descent curves and a hierarchy of phase transitions
  Gabriel Mel · Surya Ganguli
- 2021 Spotlight: A theory of high dimensional regression with arbitrary correlations between input features and target functions: sample complexity, multiple descent curves and a hierarchy of phase transitions
  Gabriel Mel · Surya Ganguli
- 2021 Oral: Understanding self-supervised learning dynamics without contrastive pairs
  Yuandong Tian · Xinlei Chen · Surya Ganguli
- 2020 Poster: Weakly-Supervised Disentanglement Without Compromises
  Francesco Locatello · Ben Poole · Gunnar Ratsch · Bernhard Schölkopf · Olivier Bachem · Michael Tschannen
- 2020 Poster: Two Routes to Scalable Credit Assignment without Weight Symmetry
  Daniel Kunin · Aran Nayebi · Javier Sagastuy-Brena · Surya Ganguli · Jonathan Bloom · Daniel Yamins
- 2020 Poster: On Implicit Regularization in $\beta$-VAEs
  Abhishek Kumar · Ben Poole
- 2019 Workshop: Theoretical Physics for Deep Learning
  Jaehoon Lee · Jeffrey Pennington · Yasaman Bahri · Max Welling · Surya Ganguli · Joan Bruna
- 2019: Opening Remarks
  Jaehoon Lee · Jeffrey Pennington · Yasaman Bahri · Max Welling · Surya Ganguli · Joan Bruna
- 2019 Poster: On Variational Bounds of Mutual Information
  Ben Poole · Sherjil Ozair · Aäron van den Oord · Alexander Alemi · George Tucker
- 2019 Oral: On Variational Bounds of Mutual Information
  Ben Poole · Sherjil Ozair · Aäron van den Oord · Alexander Alemi · George Tucker
- 2018 Poster: Fixing a Broken ELBO
  Alexander Alemi · Ben Poole · Ian Fischer · Joshua V Dillon · Rif Saurous · Kevin Murphy
- 2018 Oral: Fixing a Broken ELBO
  Alexander Alemi · Ben Poole · Ian Fischer · Joshua V Dillon · Rif Saurous · Kevin Murphy
- 2017 Poster: On the Expressive Power of Deep Neural Networks
  Maithra Raghu · Ben Poole · Surya Ganguli · Jon Kleinberg · Jascha Sohl-Dickstein
- 2017 Talk: On the Expressive Power of Deep Neural Networks
  Maithra Raghu · Ben Poole · Surya Ganguli · Jon Kleinberg · Jascha Sohl-Dickstein