

Poster in Affinity Workshop: LatinX in AI (LXAI) Workshop

Studying Generalization on Memory-Based Methods in Continual Learning

Felipe del Rio · Julio Hurtado · Cristian Calderon · Alvaro Soto · Vincenzo Lomonaco

Keywords: [ Systematic Generalization ] [ Out-of-Distribution Generalization ] [ Spurious Correlations ] [ Continual Learning ]


Abstract:

One of the objectives of Continual Learning is to learn new concepts continually over a stream of experiences while avoiding catastrophic forgetting. To mitigate complete knowledge overwriting, memory-based methods store a percentage of previous data distributions to be used during training. Although these methods produce good results, few studies have tested their out-of-distribution generalization properties or whether they overfit the replay memory. In this work, we show that although these methods can help with traditional in-distribution generalization, they can strongly impair out-of-distribution generalization by learning spurious features and correlations. In a controlled environment built with the Synbols benchmark generator (Lacoste et al., 2020), we demonstrate that this lack of out-of-distribution generalization arises mainly in the linear classifier.
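To make the "memory-based" setting concrete, the sketch below shows one common way such methods are implemented: a fixed-size replay buffer filled by reservoir sampling, whose contents are mixed into each training step on new data. This is a minimal, generic illustration, not the authors' implementation; the class and function names (`ReplayBuffer`, `replay_training_step`, `model_update`) are assumptions for exposition.

```python
import random


class ReplayBuffer:
    """Minimal reservoir-sampling memory for experience replay.

    Stores a fixed-size subset of past (x, y) examples so that later
    training steps can rehearse earlier data distributions.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0  # total number of examples observed so far

    def add(self, example):
        # Reservoir sampling keeps each observed example with equal probability.
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            idx = random.randint(0, self.seen)
            if idx < self.capacity:
                self.memory[idx] = example
        self.seen += 1

    def sample(self, batch_size):
        k = min(batch_size, len(self.memory))
        return random.sample(self.memory, k)


def replay_training_step(model_update, buffer, current_batch, replay_size):
    """Mix the current task's batch with replayed examples before updating.

    `model_update` is any callable that performs one optimization step
    (e.g. an SGD step) on the combined batch.
    """
    combined = list(current_batch) + buffer.sample(replay_size)
    model_update(combined)
    # Store the new examples so future tasks can rehearse them.
    for example in current_batch:
        buffer.add(example)
```

Because the buffer holds only a small fraction of past data and is revisited at every step, the classifier can latch onto features that are spuriously predictive within that small sample, which is consistent with the out-of-distribution degradation the abstract describes.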
