Stabilized Supralinear Networks Learn to Switch Coding Strategies Balancing Cost and Performance
Abstract
Lateral connections (LCs) are ubiquitous in cortical circuits. While modern deep learning architectures have rich intralayer interactions (e.g., convolutional mixing, normalization, or attention) that support feature selectivity and contextual modulation, explicit excitatory and inhibitory (E-I) LCs remain underexplored, and their addition to encoding models is rarely justified in either deep learning or neuroscience. In this work, we analyze and train stabilized supralinear networks (SSNs), which have sufficiently strong recurrent excitation and feedback inhibition, using local unsupervised plasticity rules under natural image stimulation. We demonstrate that these LCs support a transition between dynamical regimes under different input conditions. During this transition, the network shifts from population coding to sparse coding, balancing cost and performance: population coding extracts robust features from low-contrast or noisy inputs by recruiting more neurons, while sparse coding encodes high-contrast, clean inputs efficiently at minimal cost. We then compare these results against sparse-coding and ICA-based models. Our findings frame explicit E-I recurrent neural networks through the lens of dynamic coding strategies and offer insights for designing more adaptive and robust systems, with a concrete example in vision.
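For context, a minimal sketch of the standard SSN rate dynamics that the abstract presumably builds on; the exact formulation, parameters, and plasticity rule used in this work are not stated in the abstract, so the symbols below follow the canonical SSN literature rather than this paper:
\[
\tau_a \frac{dr_a}{dt} = -r_a + k \left[ \sum_{b} W_{ab}\, r_b + h_a \right]_+^{n}, \qquad n > 1,
\]
where $r_a$ is the firing rate of unit $a$, $W_{ab}$ is the recurrent E-I lateral weight matrix, $h_a$ is the feedforward (stimulus-driven) input, $[\cdot]_+$ denotes rectification, $k$ is a gain constant, and $\tau_a$ is the unit time constant. The supralinear exponent $n > 1$ combined with strong recurrent excitation stabilized by feedback inhibition is what gives the SSN its input-dependent dynamical regimes, which is the property the abstract links to switching between population and sparse coding.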