Modern AI systems have achieved impressive results in many specific domains, from image and speech recognition to natural language processing and mastering complex games such as chess and Go. However, they remain largely inflexible, fragile and narrow: unable to continually adapt to a wide range of changing environments and novel tasks without "catastrophically forgetting" what they have learned before, to infer higher-order abstractions that allow systematic generalization to out-of-distribution data, or to achieve the level of robustness necessary to "survive" various perturbations in their environment - a natural property of most biological intelligent systems. In this talk, we provide a brief overview of advances in the field of continual learning (CL) [1], which aims to push AI from "narrow" to "broad": from unsupervised, adaptive ("neurogenetic") architectural adaptations [2] to a recent general supervised CL framework for quickly solving new, out-of-distribution tasks combined with fast remembering of previous ones; this framework unifies continual-, meta-, meta-continual-, and continual-meta learning and introduces continual-MAML, an online extension of the popular MAML algorithm [3]. Furthermore, we present a brief overview of the most challenging setting, continual reinforcement learning (RL), characterized by dynamic, non-stationary environments, and discuss open problems and challenges in bridging the gap between the current state of continual RL and better incremental reinforcement learners that can function in increasingly realistic, human-like learning environments [4]. Next, we address the robust representation learning problem, i.e., extracting features invariant to various stochastic and/or adversarial perturbations of the environment - a goal shared across continual, meta- and transfer learning, as well as adversarial robustness, out-of-distribution generalization, self-supervised learning, and related subfields.
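To give a flavor of the MAML-style meta-learning mentioned above, here is a minimal first-order sketch (FOMAML) on a toy family of linear-regression tasks. This is an illustrative assumption, not the continual-MAML algorithm itself: the task distribution (`sample_task`, with a shared weight vector `W_SHARED`), the support/query split, and all hyperparameters are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W_SHARED = np.array([2.0, -1.0, 0.5])  # hidden structure shared across tasks

def sample_task():
    # Each task: linear regression whose weights are a noisy copy of W_SHARED
    w_true = W_SHARED + 0.1 * rng.normal(size=3)
    X = rng.normal(size=(20, 3))
    return X, X @ w_true

def loss_grad(w, X, y):
    # Gradient of the mean squared error 0.5*mean((Xw - y)^2) w.r.t. w
    return X.T @ (X @ w - y) / len(y)

def fomaml_step(w, tasks, inner_lr=0.1, outer_lr=0.01):
    # First-order MAML: one inner adaptation step per task on a "support" split,
    # then update the meta-initialization with the post-adaptation gradient
    # evaluated on a held-out "query" split.
    meta_grad = np.zeros_like(w)
    for X, y in tasks:
        w_adapted = w - inner_lr * loss_grad(w, X[:10], y[:10])  # inner step
        meta_grad += loss_grad(w_adapted, X[10:], y[10:])        # outer gradient
    return w - outer_lr * meta_grad / len(tasks)

w = np.zeros(3)  # meta-initialization
for _ in range(500):
    w = fomaml_step(w, [sample_task() for _ in range(4)])
# After meta-training, w ends up near W_SHARED, so a single inner
# gradient step is enough to adapt to any new task from this family.
```

The key design point is the two nested loops: the inner loop adapts to one task, while the outer loop moves the shared initialization so that such adaptation succeeds quickly; continual-MAML extends this idea to an online stream of tasks.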
As an example, our recent Adversarial Feature Desensitization (AFD) approach [5] trains a feature extractor network to generate representations that are both predictive and robust to input perturbations (e.g., adversarial attacks), and demonstrates a significant improvement over the state of the art despite its relative simplicity: feature robustness is enforced via an additional adversarial decoder with a GAN-like objective that attempts to discriminate between the features of original and perturbed inputs. Finally, we conclude the talk with a discussion of several directions for future work, including drawing inspiration (e.g., inductive biases) from neuroscience [6], in order to develop truly broad and robust lifelong-learning AI systems.
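The GAN-like objective behind this idea can be sketched in a toy form. The following is a hypothetical simplification, not the authors' implementation: it uses a linear feature extractor, random sign perturbations in place of real adversarial attacks, and hand-derived logistic-regression gradients; only the three-part structure (classifier loss, discriminator loss, and a feature-extractor term that makes perturbed features indistinguishable from clean ones) reflects the described approach.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -30, 30)))

# Toy data: two Gaussian blobs, one per class
X = np.concatenate([rng.normal(2, 1, (100, 2)), rng.normal(-2, 1, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

W = np.eye(2)                   # feature extractor (linear, for illustration)
c = 0.1 * rng.normal(size=2)    # classifier head on features
d = 0.1 * rng.normal(size=2)    # adversarial discriminator head on features

lr, eps = 0.05, 0.5
for step in range(400):
    Xp = X + eps * np.sign(rng.normal(size=X.shape))  # stand-in for an attack
    Z, Zp = X @ W, Xp @ W                             # clean / perturbed features

    # 1) classifier + features: minimize BCE on clean inputs
    g = sigmoid(Z @ c) - y                            # dBCE/dlogit
    c -= lr * Z.T @ g / len(y)
    W -= lr * np.einsum('n,ni,j->ij', g, X, c) / len(y)

    # 2) discriminator: tell clean (label 1) from perturbed (label 0) features
    gd = np.concatenate([sigmoid(Z @ d) - 1, sigmoid(Zp @ d)])
    d -= lr * np.concatenate([Z, Zp]).T @ gd / (2 * len(y))

    # 3) feature extractor: fool the discriminator, i.e. desensitize features
    #    so that perturbed features look like clean ones
    ga = sigmoid(Zp @ d) - 1                          # gradient of -log D(z')
    W -= lr * np.einsum('n,ni,j->ij', ga, Xp, d) / len(y)

acc = np.mean((sigmoid((X @ W) @ c) > 0.5) == (y == 1))  # clean accuracy
```

Step 3 is the GAN-like part: the discriminator is trained to separate clean from perturbed features, while the feature extractor receives the opposite gradient, pushing perturbed features toward the clean-feature distribution.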
[1] De Lange et al. (2019). A continual learning survey: Defying forgetting in classification tasks. https://arxiv.org/abs/1909.08383
[2] Garg et al. (2017). Neurogenesis-Inspired Dictionary Learning: Online Model Adaptation in a Changing World. IJCAI 2017. https://arxiv.org/abs/1701.06106
[3] Caccia et al. (2020). Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning. Submitted. https://arxiv.org/abs/2003.05856
[4] Khetarpal et al. (2020). Towards Continual Reinforcement Learning: A Review and Perspectives. In preparation.
[5] Bashivan et al. (2020). Adversarial Feature Desensitization. Submitted. https://arxiv.org/abs/2006.04621
[6] Sinz et al. (2019). Engineering a Less Artificial Intelligence. Neuron. https://xaqlab.com/wp-content/uploads/2019/09/LessArtificialIntelligence.pdf