Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Continual Deep Learning on the Edge via Stochastic Local Competition among Subnetworks
Theodoros Christophides · Kyriakos Tolias · Sotirios Chatzis
Keywords: [ Continual Deep Learning ] [ Edge ] [ Stochastic Local Competition ]
Continual learning on edge devices poses unique challenges due to stringent resource constraints. This paper introduces a novel method that leverages stochastic competition principles to promote sparsity, significantly reducing deep network memory footprint and computational demand. Specifically, we propose deep networks comprising blocks of units that compete locally to win the representation of each newly arriving task; competition takes place in a stochastic manner. This type of network organization yields sparse task-specific representations at each network layer; the sparsity pattern is learned during training and differs across tasks. Crucially, our method sparsifies both the weights and the weight gradients, thus facilitating training on edge devices. Sparsification is governed by the winning probability of each unit within its block. During inference, the network retains only the winning unit of each block and zeroes out all weights pertaining to non-winning units for the task at hand. Our approach is thus specifically tailored for deployment on edge devices, providing an efficient and scalable solution for continual learning in resource-limited environments.
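To make the competition mechanism concrete, below is a minimal PyTorch sketch of a stochastic local-winner-takes-all layer in the spirit described by the abstract: output units are grouped into blocks, one winner per block is sampled during training from a categorical distribution given by the normalized unit activations, and only the most probable unit per block survives at inference. The class name `StochasticLWTA`, the block size `K`, and the hard masking scheme are illustrative assumptions, not the authors' actual implementation (which may, for instance, use a Gumbel-softmax relaxation for differentiable winner sampling).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StochasticLWTA(nn.Module):
    """Linear layer whose output units are grouped into blocks of size K.

    Training: one winner per block is sampled from a categorical
    distribution over the block's (softmax-normalized) activations;
    losing units are zeroed, so both the representations and the
    gradients flowing back to losing units are sparse.
    Inference: only the most probable unit per block is retained.
    """

    def __init__(self, in_features: int, num_blocks: int, block_size: int):
        super().__init__()
        self.num_blocks = num_blocks
        self.block_size = block_size
        self.linear = nn.Linear(in_features, num_blocks * block_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.linear(x)                                  # (B, U*K)
        h = h.view(-1, self.num_blocks, self.block_size)    # (B, U, K)
        probs = F.softmax(h, dim=-1)                        # winning probabilities

        if self.training:
            # Stochastic competition: sample one winner per block.
            winner = torch.multinomial(
                probs.view(-1, self.block_size), num_samples=1
            ).view(-1, self.num_blocks)                     # (B, U)
        else:
            # Deterministic inference: keep the most probable unit only.
            winner = probs.argmax(dim=-1)                   # (B, U)

        # Hard one-hot mask zeroes out all non-winning units.
        mask = F.one_hot(winner, num_classes=self.block_size).to(h.dtype)
        out = h * mask
        return out.view(-1, self.num_blocks * self.block_size)
```

A quick usage example: `StochasticLWTA(in_features=128, num_blocks=32, block_size=4)` applied to a batch of inputs produces activations in which exactly one of every four units is nonzero. Because losing units contribute nothing to the output, the weights feeding them receive zero gradient for that sample, which is consistent with the abstract's claim of sparsifying both weights and weight gradients during training.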