
Poster

Learning to Continually Learn with the Bayesian Principle

Soochan Lee · Hyeonseong Jeon · Jaehyeon Son · Gunhee Kim


Abstract:

In the present era of deep learning, continual learning research is mainly focused on mitigating forgetting when training a neural network with stochastic gradient descent (SGD) on a non-stationary stream of data. On the other hand, in the more classical literature of statistical machine learning, many models have sequential Bayesian update rules that yield the same learning outcome as batch training, i.e., they are completely immune to catastrophic forgetting. However, they are often too simple to model complex real-world data. In this work, we introduce a general meta-continual learning framework that combines the strong representational power of neural networks with the robustness to forgetting of simple statistical models. In our framework, continual learning takes place only in the statistical models via ideal sequential Bayesian update rules, while the neural networks are meta-learned to bridge the raw data and the statistical models. This approach not only achieves significantly improved performance but also exhibits excellent scalability. Since our approach is domain-agnostic and model-agnostic, it can be applied to a wide range of problems and easily integrated with existing model architectures.
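To make the "ideal sequential Bayesian update" property concrete, here is a minimal sketch (not the paper's actual model or code): a conjugate Gaussian posterior over a mean with known observation variance, where updating sequentially on chunks of a stream yields exactly the same posterior as updating once on the pooled data, i.e., there is no forgetting. The `GaussianPosterior` class and the idea that a meta-learned encoder would supply its inputs are illustrative assumptions.

```python
import numpy as np


class GaussianPosterior:
    """Conjugate posterior over a mean with known observation variance.

    Sequential Bayesian updates on chunks of data reproduce the batch
    posterior exactly, so the model cannot "forget" earlier chunks.
    In the framework described above, a meta-learned neural encoder
    would map raw inputs to the features z consumed here (hypothetical).
    """

    def __init__(self, mu0=0.0, tau0=1.0, obs_var=1.0):
        self.mu, self.tau = mu0, tau0      # prior mean and precision
        self.obs_prec = 1.0 / obs_var      # known observation precision

    def update(self, z):
        """Absorb a chunk of features z with one closed-form update."""
        z = np.atleast_1d(z)
        new_tau = self.tau + len(z) * self.obs_prec
        self.mu = (self.tau * self.mu + self.obs_prec * z.sum()) / new_tau
        self.tau = new_tau
        return self


# A toy non-stationary stream, processed one chunk at a time.
stream = [np.array([0.9, 1.1]), np.array([1.3]), np.array([0.7, 1.0, 1.2])]

sequential = GaussianPosterior()
for chunk in stream:
    sequential.update(chunk)

# Batch training on all data at once gives the identical posterior.
batch = GaussianPosterior().update(np.concatenate(stream))

assert np.isclose(sequential.mu, batch.mu)
assert np.isclose(sequential.tau, batch.tau)
print(f"posterior mean={sequential.mu:.4f}, precision={sequential.tau:.4f}")
```

The point of the sketch is only the order-invariance of conjugate updates; the paper's contribution is in meta-learning the neural components that feed such statistical models, which this toy example does not attempt to reproduce.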
