Poster
Overcoming Multi-model Forgetting
Yassine Benyahia · Kaicheng Yu · Kamil Bennani-Smires · Martin Jaggi · Anthony C. Davison · Mathieu Salzmann · Claudiu Musat

Wed Jun 12th 06:30 -- 09:00 PM @ Pacific Ballroom #19

We identify a phenomenon, which we refer to as multi-model forgetting, that occurs when sequentially training multiple deep networks with partially shared parameters: the performance of previously trained models degrades as one optimizes a subsequent one, because the shared parameters are overwritten. To overcome this, we introduce a statistically justified weight plasticity loss that regularizes the learning of a model's shared parameters according to their importance for the previous models, and we demonstrate its effectiveness both when training two models sequentially and for neural architecture search. Adding weight plasticity to neural architecture search preserves the best models to the end of the search and yields improved results on both natural language processing and computer vision tasks.
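The weight plasticity loss described above amounts to a quadratic penalty that anchors shared parameters to the values they held after training the previous model, weighted by a per-parameter importance estimate. Below is a minimal PyTorch sketch of that idea, assuming a diagonal Fisher information estimate computed from squared gradients; the function names (estimate_fisher_diagonal, weight_plasticity_penalty) and the hyperparameter lam are illustrative assumptions, not the authors' exact implementation.

import torch

def estimate_fisher_diagonal(model, data_loader, loss_fn, n_batches=32):
    # Importance of each parameter, approximated as the average squared
    # gradient of the task loss (a diagonal Fisher information estimate).
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    count = 0
    for x, y in data_loader:
        if count >= n_batches:
            break
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        count += 1
    return {n: f / max(count, 1) for n, f in fisher.items()}

def weight_plasticity_penalty(model, ref_params, fisher, shared_names, lam=1.0):
    # Quadratic penalty discouraging *shared* parameters from drifting away
    # from the values (ref_params) they held after training the prior model.
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for n, p in model.named_parameters():
        if n in shared_names:
            penalty = penalty + (fisher[n] * (p - ref_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# Hypothetical use while training a second model that shares weights:
#   loss = task_loss + weight_plasticity_penalty(model_b, ref_params,
#                                                fisher, shared_names)

In a weight-sharing architecture search loop, the same penalty would presumably be added to the loss of each sampled child model so that optimizing it does not degrade previously evaluated architectures.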

Author Information

Yassine Benyahia (IPROVA)
Kaicheng Yu (EPFL)
Kamil Bennani-Smires (Swisscom)
Martin Jaggi (EPFL)
Anthony C. Davison (EPFL)
Mathieu Salzmann (EPFL)
Claudiu Musat (Swisscom)
