BTSP-CAM: A Brain-Inspired Geometric Memory for Class-Incremental Learning
Abstract
Gradient-based optimization in class-incremental learning (CIL) often faces the stability–plasticity dilemma, since continuous weight updates can distort decision boundaries learned from earlier tasks. We revisit this problem from the viewpoint of stochastic geometric memory allocation and propose BTSP-CAM, a gradient-free memory system that translates theoretical insights from the hippocampal simpleBTSP model into a practical algorithm. Rather than fine-tuning a frozen encoder by backpropagation, BTSP-CAM externalizes plasticity into a binary synaptic matrix that evolves through local stochastic bit-flip updates. A trace-gated plateau process, driven by eligibility traces together with familiarity and collision signals, modulates when and where synapses are rewritten and suppresses cross-class interference in Hamming space. The resulting geometric memory states are mapped to semantic logits through a CA1-like competitive layer and a closed-form ridge readout, enabling fast consolidation after each task. Empirically, BTSP-CAM rivals gradient-based methods in a strictly exemplar-free setting and consistently boosts state-of-the-art baselines as a lightweight plugin. Mechanistic analysis validates our geometric theory, confirming that stochastic repulsion actively bounds class overlap and stabilizes decision margins.
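The gradient-free plasticity described above can be sketched minimally. The sketch below is illustrative only and not the paper's exact rule: the function name `btsp_update` and the probabilities `p_plateau` and `p_flip` are hypothetical choices standing in for the trace-gated plateau process. It shows the essential structure of a binary synaptic matrix updated by local stochastic bit flips, where a random plateau signal gates which postsynaptic neurons are rewritten and only synapses from active inputs are eligible to flip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N_in presynaptic inputs, N_out memory neurons.
N_in, N_out = 64, 256

# Binary synaptic matrix (bool), sparsely initialized.
W = rng.random((N_out, N_in)) < 0.1

def btsp_update(W, x, p_plateau=0.05, p_flip=0.5, rng=rng):
    """One gradient-free BTSP-style step (illustrative sketch).

    W          : binary weight matrix, shape (N_out, N_in)
    x          : binary input pattern, shape (N_in,)
    p_plateau  : per-neuron probability of a plateau event gating plasticity
    p_flip     : probability that an eligible synapse flips its binary state
    """
    # Plateau events select which postsynaptic neurons are rewritten.
    plateau = rng.random(W.shape[0]) < p_plateau
    # Independent stochastic bit-flip mask over all synapses.
    flip = rng.random(W.shape) < p_flip
    # A synapse is eligible only if its neuron fired a plateau,
    # its presynaptic input is active, and the flip coin landed heads.
    eligible = plateau[:, None] & x.astype(bool)[None, :] & flip
    # XOR flips exactly the eligible bits; no gradients are involved.
    return W ^ eligible

x = rng.random(N_in) < 0.2       # a sparse binary input pattern
W_new = btsp_update(W, x)
```

Because the update is a local XOR gated by stochastic events, memories for different classes tend to occupy distinct bit patterns, which is the Hamming-space repulsion the abstract refers to.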