Diversity-aware Weight Perturbation Promotes Robust Adaptation
Abstract
Compute-In-Memory (CIM) accelerators are promising for energy-efficient edge inference, yet they face fundamental challenges when deploying Deep Neural Networks (DNNs): hardware-induced weight perturbations from intrinsic noise and device drift degrade accuracy and impede reliable inference. To address this challenge, we propose Diversity-aware Weight Perturbation (DWP), an immune-system-inspired training method that emulates affinity-based selection by exploiting sample-level prediction disagreement under diverse noise realizations to guide adaptive sample weighting, thereby building robustness to weight perturbations. Experiments show that DWP-trained models are consistently more robust, achieving over 15\% higher accuracy than standard-trained models under severe weight perturbations (mismatch levels up to 70\%) and maintaining 90\% inference accuracy over a simulated one-year CIM operation with only 2\%–4\% accuracy variation. Moreover, under matched model and inference configurations, deployment on low-precision CIM hardware reduces inference energy by 38\% relative to a GPU baseline. These results demonstrate that DWP enables robust, energy-efficient neural network deployment on resource-constrained edge devices with inherent hardware uncertainties.
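The core idea of exploiting prediction disagreement under diverse noise realizations can be sketched in miniature. The code below is a hypothetical illustration, not the paper's implementation: all names (`noisy_copy`, `disagreement_weights`), the linear classifier, the multiplicative-Gaussian noise model, and the choice of `k`, `sigma`, and the normalization are assumptions made for the sketch.

```python
import random

def noisy_copy(w, sigma, rng):
    # Multiplicative Gaussian noise as a stand-in for CIM device
    # variation (assumed noise model, not from the paper).
    return [wi * (1.0 + rng.gauss(0.0, sigma)) for wi in w]

def predict(w, x):
    # Minimal linear classifier with a hard threshold at zero.
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else 0

def disagreement_weights(w, xs, k=8, sigma=0.3, seed=0):
    # For each sample, measure how often its prediction flips across
    # k noisy weight realizations; samples with higher disagreement
    # receive larger training weights (illustrative weighting rule).
    rng = random.Random(seed)
    copies = [noisy_copy(w, sigma, rng) for _ in range(k)]
    weights = []
    for x in xs:
        preds = [predict(c, x) for c in copies]
        p1 = sum(preds) / k
        # 0 when all realizations agree, 1 at a 50/50 split.
        weights.append(4.0 * p1 * (1.0 - p1))
    # Normalize so the weights average to ~1 over the batch.
    total = sum(weights) or 1.0
    return [v / total * len(xs) for v in weights]
```

In this toy setup, samples lying near the noisy decision boundary (whose predictions flip between realizations) are up-weighted, loosely mirroring the affinity-based selection analogy in the abstract.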