Practical Mechanism for Fault-Tolerant Spiking Neural Networks via Simple Input Control Based on Learnable Fragmentation
Abstract
Spiking Neural Networks (SNNs) are regarded as the third generation of neural networks, offering energy-efficient computing for neuromorphic devices. Despite this benefit, hardware-implemented SNNs are vulnerable to hardware faults, which severely degrade their performance. Previous approaches have required direct access to internal SNN circuits to modify weights or monitor internal states, limiting their practicality. Improving robustness to hardware faults without such access remains challenging. To overcome this challenge, we propose a fault-tolerant mechanism that operates only through input data control. Hardware faults reduce the usable learning capacity of SNNs, creating a mismatch between the instantaneous input load and the degraded network dynamics. Our mechanism mitigates this mismatch by dividing each input sample into multiple fragments, redistributing the input load via a learnable fragmentation strategy. The strategy learns two key fragmentation components: 1) the division boundaries and 2) the number of fragments. To our knowledge, this is the first mechanism to improve the fault tolerance of SNNs without accessing the internal SNN circuit. Experimental results demonstrate that our mechanism consistently outperforms previous methods across various SNN models, achieving these gains without direct access to internal circuits. Furthermore, we validate its effectiveness on SNNs implemented on a physical FPGA platform, confirming its practicality.
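To make the fragmentation idea concrete, the sketch below splits a spike-train input along its time axis at given boundary positions. This is only an illustration under our own assumptions: the function name `fragment_input`, the `(T, N)` input layout, and the fixed `boundaries` argument are all hypothetical; in the paper's mechanism both the boundary positions and the fragment count are learned rather than supplied by hand.

```python
import numpy as np

def fragment_input(x, boundaries):
    """Split a spike-train input x of shape (T, N) into fragments
    along the time axis at the given boundary fractions.

    boundaries: sorted fractions in (0, 1); the resulting number of
    fragments is len(boundaries) + 1.  Both quantities are what the
    learnable strategy would optimize; here they are fixed inputs.
    """
    T = x.shape[0]
    cut_points = [int(round(b * T)) for b in boundaries]
    return np.split(x, cut_points, axis=0)

# Toy example: 100 time steps, 8 input neurons, random binary spikes.
rng = np.random.default_rng(0)
x = (rng.random((100, 8)) < 0.2).astype(np.int8)

# Two boundaries -> three fragments, presented to the SNN in turn so
# the instantaneous input load is spread over the fragments.
fragments = fragment_input(x, boundaries=[0.3, 0.7])
print([f.shape for f in fragments])  # [(30, 8), (40, 8), (30, 8)]
```

Presenting the fragments sequentially preserves the full input (their concatenation equals the original sample) while lowering the load the faulty network must absorb at any one time.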