Poster
Continual Learning via Sequential Function-Space Variational Inference
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #533

Sequential Bayesian inference over predictive functions is a natural framework for continual learning from streams of data. However, applying it to neural networks has proved challenging in practice. Addressing the drawbacks of existing techniques, we propose an optimization objective derived by formulating continual learning as sequential function-space variational inference. In contrast to existing methods that regularize neural network parameters directly, this objective allows parameters to vary widely during training, enabling better adaptation to new tasks. Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions and more effective regularization. We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods while depending less on maintaining a set of representative points from previous tasks.
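As a rough illustration of the idea described in the abstract (a hedged sketch, not the authors' exact formulation), a sequential function-space variational objective for task $t$ with data $\mathcal{D}_t$ might take the form

$$
\mathcal{F}_t(q_t) \;=\; \mathbb{E}_{q_t(f)}\!\left[\log p(\mathcal{D}_t \mid f)\right] \;-\; \mathrm{KL}\!\left(q_t\!\left(f(\mathbf{X}_C)\right) \,\Vert\, q_{t-1}\!\left(f(\mathbf{X}_C)\right)\right),
$$

where $q_{t-1}$ is the variational distribution from the previous task, acting as the prior over predictive functions, and $\mathbf{X}_C$ denotes a set of context (representative) points at which the function-space KL divergence is evaluated. Maximizing such an objective fits the new task's data while keeping the network's predictive function close to its earlier behavior at the context points, rather than constraining the network parameters themselves.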

Author Information

Tim G. J. Rudner (University of Oxford)
Freddie Bickford Smith (University of Oxford)
Qixuan Feng (University of Oxford)
Yee-Whye Teh (Oxford and DeepMind)
Yarin Gal (University of Oxford)
