LIF Recurrent Memory Enables Long-Horizon Spiking Computation
Fenghao Liu ⋅ Yipeng Shen ⋅ Peng Chen ⋅ Qian Zheng ⋅ Peng Lin ⋅ Gang Pan
Abstract
Processing long sequences such as speech requires models to maintain long-term dependencies. This is challenging for recurrent spiking neural networks: the leaky dynamics of their neuron models cause information stored in membrane potentials to decay, and gradients vanish during backpropagation through time. These issues can be mitigated with more complex neuron designs, such as ALIF and TC-LIF, but such neuron-level solutions often incur high computational costs and complicate hardware implementation, undermining the efficiency advantages of spiking neural networks. Here we propose an architecture-level solution that leverages the dynamical interactions of a few leaky integrate-and-fire (LIF) neurons to enhance long-term information storage. The memory capability of this LIF-based micro-circuit is adaptively modulated by the global recurrent connections of the recurrent spiking neural network, selectively enhancing temporal information retention and promoting stable gradient propagation through time. The proposed model outperforms previous methods, including LSTM, ALIF, and TC-LIF, on long-sequence tasks, achieving 96.52% accuracy on the PS-MNIST dataset. Our method also provides a compelling efficiency advantage, yielding up to a 277× improvement in computational efficiency over conventional models such as the LSTM. This work paves the way for building cost-effective, hardware-friendly, and interpretable spiking neural networks for long sequence modeling.
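The leak the abstract refers to follows from the standard discrete-time LIF update. As a minimal sketch (written from textbook LIF dynamics, not the paper's code; the leak factor `beta`, threshold `v_th`, and soft reset are common modeling choices rather than values from the paper), the example below shows how a single input decays geometrically in the membrane potential, which is why plain LIF neurons struggle to retain information over long horizons:

```python
import numpy as np

def lif_step(v, x, beta=0.9, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire (LIF) neuron.

    v    -- membrane potentials carried over from the previous step
    x    -- weighted input current at the current step
    beta -- leak factor in (0, 1); smaller values forget faster
    v_th -- firing threshold
    """
    v = beta * v + x                 # leaky integration: stored potential decays each step
    s = (v >= v_th).astype(v.dtype)  # emit a spike wherever the threshold is crossed
    v = v - s * v_th                 # soft reset: subtract the threshold after a spike
    return v, s

# The leak in action: an input at t = 0 is steadily forgotten afterwards.
v = np.zeros(1)
v, _ = lif_step(v, np.array([0.8]))      # sub-threshold input, no spike
for t in range(5):
    v, s = lif_step(v, np.zeros(1))      # no further input
    print(f"t={t + 1}: v={v[0]:.3f}")    # 0.8 * beta**(t+1), decaying toward 0
```

Under these assumptions the stored value shrinks by a factor of `beta` every step, so after ~50 steps less than 1% of the original signal remains; the paper's micro-circuit of interacting LIF neurons is proposed precisely to counteract this decay at the architecture level rather than by redesigning the neuron.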