Improving the Performance and Learning Stability of Parallelizable RNNs Designed for Ultra-Low Power Applications
Julien Brandoit ⋅ Arthur Fyon ⋅ Damien Ernst ⋅ Guillaume Drion
Abstract
Sequence learning is dominated by Transformers and parallelizable recurrent neural networks such as state-space models, yet learning long-term dependencies remains challenging, and state-of-the-art designs trade power consumption for performance. The Bistable Memory Recurrent Unit (BMRU) was introduced to enable hardware–software co-design of ultra-low power RNNs: quantized states with hysteresis provide persistent memory while mapping directly to analog primitives. However, BMRU performance lags behind that of other parallelizable RNNs on complex sequential tasks. In this paper, we identify gradient blocking during state updates as a key limitation and propose a cumulative update formulation that restores gradient flow while preserving persistent memory, creating skip-connections through time. This leads to the Cumulative Memory Recurrent Unit (CMRU) and its relaxed variant, the $\alpha$CMRU. Experiments show that the cumulative formulation dramatically improves convergence stability and reduces initialization sensitivity. The CMRU and $\alpha$CMRU match the performance of Linear Recurrent Units (LRUs) and minimal Gated Recurrent Units (minGRUs) on standard benchmarks at small model sizes, while the CMRU retains the quantized states, persistent memory, and noise-resilient dynamics essential for analog implementation.
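As an informal illustration of the idea sketched above (not the paper's exact equations; the symbols $h_t$, $z_t$, and $\tilde{h}_t$ are placeholders introduced here, and $z_t$, $\tilde{h}_t$ are treated as independent of $h_{t-1}$ for simplicity), the difference between a conventional gated overwrite and a cumulative update can be written as
\[
\text{gated:}\quad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t,
\qquad
\text{cumulative:}\quad h_t = h_{t-1} + z_t \odot \tilde{h}_t .
\]
In the gated form, the Jacobian $\partial h_t / \partial h_{t-1} = \mathrm{diag}(1 - z_t)$ shrinks whenever the gate opens (or is blocked by a quantized state update), whereas the cumulative form contributes an identity term, $\partial h_t / \partial h_{t-1} = I$, acting as a skip-connection through time that lets gradients propagate across many steps.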