

Poster

Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning

Depeng Li · Tianqi Wang · Junwei Chen · Wei Dai · Zhigang Zeng

Hall C 4-9 #1701
[ Paper PDF ]
Thu 25 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones. In this paper, we propose a new kind of connectionist model that tailors neural unit dynamics to adapt network behavior for CIL. In each training session, it introduces a supervisory mechanism to guide network expansion, whose growth size is compactly commensurate with the intrinsic complexity of the newly arriving task. This constructs a near-minimal network while allowing the model to expand its capacity when it cannot sufficiently accommodate new classes. At inference time, it automatically reactivates the required neural units to retrieve knowledge and leaves the remaining units inactive to prevent interference. We name our model AutoActivator, which is effective and scalable. To gain insights into the neural unit dynamics, we theoretically analyze the model’s convergence property via a universal approximation theorem on learning sequential mappings, which is under-explored in the CIL community. Experiments show that our method achieves strong CIL performance in rehearsal-free and minimal-expansion settings with different backbones.
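The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general expand-then-selectively-reactivate pattern it describes, not the authors' AutoActivator. Several simplifications are assumed here: the class name `ExpandableNet` is hypothetical, the per-task growth size is chosen by the caller rather than by the paper's supervisory mechanism, and an oracle `task_id` is passed at inference, whereas the actual model reactivates the required units automatically.

```python
import torch
import torch.nn as nn

class ExpandableNet(nn.Module):
    """Toy class-incremental model: grows a new block of hidden units per
    task and, at inference, activates only one task's block (illustrative
    sketch only; not the paper's method)."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.in_dim = in_dim
        self.hidden = nn.ParameterList()  # one hidden-weight block per task
        self.heads = nn.ModuleList()      # one classifier head per task

    def add_task(self, n_new_units: int, n_new_classes: int) -> None:
        # Stand-in for the supervisory mechanism: here the growth size is
        # a caller-supplied constant, not derived from task complexity.
        w = nn.Parameter(torch.randn(n_new_units, self.in_dim) * 0.01)
        self.hidden.append(w)
        self.heads.append(nn.Linear(n_new_units, n_new_classes))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Reactivate only the units grown for `task_id`; all other blocks
        # stay untouched, so earlier tasks are not interfered with.
        h = torch.relu(x @ self.hidden[task_id].t())
        return self.heads[task_id](h)

# Usage: two incremental sessions, then inference on the second task.
net = ExpandableNet(in_dim=784)
net.add_task(n_new_units=64, n_new_classes=2)  # session 1
net.add_task(n_new_units=32, n_new_classes=2)  # session 2
logits = net(torch.randn(8, 784), task_id=1)   # only task 1's units fire
```

Because each task's units and head are disjoint in this sketch, training a new session leaves earlier parameters frozen by construction; the part the sketch deliberately omits is how the paper sizes the expansion and selects the active units without a task label.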
