Neural networks trained with backpropagation often struggle to identify classes that have been observed only a small number of times. In applications where most class labels are rare, such as language modelling, this can become a performance bottleneck. One potential remedy is to augment the network with a fast-learning non-parametric model that stores recent activations and class labels in an external memory. We explore a simplified architecture in which we treat a subset of the model parameters as fast memory stores. This can help retain information over longer time intervals than a traditional memory, and does not require additional space or compute. For image classification, we demonstrate faster binding of novel classes on an Omniglot image curriculum task. We also show improved performance for word-based language models on news reports (GigaWord), books (Project Gutenberg) and Wikipedia articles (WikiText-103), with the latter achieving a state-of-the-art perplexity of 29.2.
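To make the idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of treating the softmax output weights as a fast memory store: after the usual gradient step, the embedding rows of rarely seen classes are mixed toward the most recent hidden activation for that class, so novel labels can be bound after only a few examples. The function and parameter names (update_fast_weights, class_counts, max_count) are illustrative assumptions, not the paper's exact method or API.

```python
import torch

def update_fast_weights(softmax_weight, hidden, labels, class_counts, max_count=100):
    """Mix rarely seen classes' output embeddings toward their latest activation.

    softmax_weight: (num_classes, d) output embedding, updated in place.
    hidden:         (batch, d) final hidden activations for the minibatch.
    labels:         (batch,) target class indices.
    class_counts:   (num_classes,) running count of how often each class was seen.
    """
    with torch.no_grad():
        for h, y in zip(hidden, labels):
            class_counts[y] += 1
            # Rare classes are overwritten almost entirely by the activation;
            # frequently seen classes fall back to the ordinary (slow) gradient update.
            mix = max(0.0, 1.0 - class_counts[y].item() / max_count)
            softmax_weight[y] = (1.0 - mix) * softmax_weight[y] + mix * h

# Example usage: call once per minibatch, after optimizer.step().
num_classes, d = 1000, 64
weight = torch.nn.Linear(d, num_classes, bias=False).weight  # shape (num_classes, d)
counts = torch.zeros(num_classes, dtype=torch.long)
hidden = torch.randn(8, d)
labels = torch.randint(0, num_classes, (8,))
update_fast_weights(weight.data, hidden, labels, counts)
```

Because the fast update reuses the existing softmax parameters as the memory, it adds no extra storage or compute beyond the running class counts, which is the property the abstract highlights.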
Author Information
Jack Rae (DeepMind)
Chris Dyer (DeepMind)
Peter Dayan (UCL)
Tim Lillicrap (Google DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Fast Parametric Learning with Activation Memorization »
  Wed Jul 11th 04:40 -- 04:50 PM, Room Victoria
More from the Same Authors
- 2019 Poster: Learning Latent Dynamics for Planning from Pixels »
  Danijar Hafner · Timothy Lillicrap · Ian Fischer · Ruben Villegas · David Ha · Honglak Lee · James Davidson
- 2019 Poster: Meta-Learning Neural Bloom Filters »
  Jack Rae · Sergey Bartunov · Timothy Lillicrap
- 2019 Oral: Meta-Learning Neural Bloom Filters »
  Jack Rae · Sergey Bartunov · Timothy Lillicrap
- 2019 Oral: Learning Latent Dynamics for Planning from Pixels »
  Danijar Hafner · Timothy Lillicrap · Ian Fischer · Ruben Villegas · David Ha · Honglak Lee · James Davidson
- 2019 Poster: Deep Compressed Sensing »
  Yan Wu · Mihaela Rosca · Timothy Lillicrap
- 2019 Oral: Deep Compressed Sensing »
  Yan Wu · Mihaela Rosca · Timothy Lillicrap
- 2019 Poster: Composing Entropic Policies using Divergence Correction »
  Jonathan Hunt · Andre Barreto · Timothy Lillicrap · Nicolas Heess
- 2019 Poster: An Investigation of Model-Free Planning »
  Arthur Guez · Mehdi Mirza · Karol Gregor · Rishabh Kabra · Sebastien Racaniere · Theophane Weber · David Raposo · Adam Santoro · Laurent Orseau · Tom Eccles · Greg Wayne · David Silver · Timothy Lillicrap
- 2019 Oral: An Investigation of Model-Free Planning »
  Arthur Guez · Mehdi Mirza · Karol Gregor · Rishabh Kabra · Sebastien Racaniere · Theophane Weber · David Raposo · Adam Santoro · Laurent Orseau · Tom Eccles · Greg Wayne · David Silver · Timothy Lillicrap
- 2019 Oral: Composing Entropic Policies using Divergence Correction »
  Jonathan Hunt · Andre Barreto · Timothy Lillicrap · Nicolas Heess
- 2018 Poster: Measuring abstract reasoning in neural networks »
  Adam Santoro · Felix Hill · David GT Barrett · Ari S Morcos · Timothy Lillicrap
- 2018 Oral: Measuring abstract reasoning in neural networks »
  Adam Santoro · Felix Hill · David GT Barrett · Ari S Morcos · Timothy Lillicrap
- 2017 Workshop: Learning to Generate Natural Language »
  Yishu Miao · Wang Ling · Tsung-Hsien Wen · Kris Cao · Daniela Gerz · Phil Blunsom · Chris Dyer
- 2017 Poster: Learning to Learn without Gradient Descent by Gradient Descent »
  Yutian Chen · Matthew Hoffman · Sergio Gómez Colmenarejo · Misha Denil · Timothy Lillicrap · Matthew Botvinick · Nando de Freitas
- 2017 Talk: Learning to Learn without Gradient Descent by Gradient Descent »
  Yutian Chen · Matthew Hoffman · Sergio Gómez Colmenarejo · Misha Denil · Timothy Lillicrap · Matthew Botvinick · Nando de Freitas