Emergent Visual Representations through Unsupervised Spiking Networks with Synaptic Pruning
Abstract
Recent work has shown that brain-aligned visual representations can emerge even in randomly initialized, high-dimensional neural networks, suggesting that cortical representations may be discovered rather than fully learned through task optimization. However, how such latent brain-relevant representations are stabilized and refined during development remains unclear. Motivated by this perspective and by neuroscientific evidence of activity-dependent synaptic pruning, we study how brain-aligned representations can emerge from, and be refined within, high-dimensional unsupervised spiking systems. We propose a biologically grounded deep spiking neural network (SNN) that integrates unsupervised learning with developmental pruning dynamics. Starting from an overcomplete spiking architecture, the model self-organizes through sensory-driven activity while selectively eliminating weak or redundant synapses, progressively yielding compact and informative representations. Without labels, the resulting network forms hierarchical visual representations that align strongly with neural responses across multiple areas of mouse and macaque visual cortex, outperforming supervised and unsupervised ANN and SNN baselines. Synaptic pruning consistently enhances this alignment and further improves robustness in noisy and few-shot recognition settings. By unifying high-dimensional unsupervised spiking representations with activity-dependent synaptic pruning, this work provides a computational account of developmental refinement in visual cortex and bridges recent findings on emergent brain alignment in random networks with biologically grounded models of representation learning and structure formation.
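To make the pruning mechanism concrete, the minimal Python sketch below illustrates one plausible reading of the description above: an overcomplete layer is shaped by unsupervised, Hebbian-style activity, and synapses that are both weak and rarely co-active are eliminated. This is an illustrative sketch, not the paper's implementation; the class name, thresholds, trace decay, and the specific update rule are all assumptions introduced here.

```python
import numpy as np

class PrunableSpikingLayer:
    """Toy overcomplete layer with a Hebbian update and activity-dependent pruning.

    Illustrative assumptions throughout: the abstract does not specify the
    actual architecture, learning rule, or pruning criteria.
    """

    def __init__(self, n_in=100, n_out=400, seed=0):
        rng = np.random.default_rng(seed)
        # Overcomplete: more output units than inputs.
        self.W = rng.normal(0.0, 0.1, size=(n_out, n_in))
        # Running per-synapse pre/post co-activity trace.
        self.trace = np.zeros((n_out, n_in))

    def update(self, pre_spikes, post_spikes, lr=0.01, decay=0.99):
        # Strengthen synapses whose pre/post units co-fire; this Hebbian step
        # stands in for the unsupervised (e.g., STDP-like) learning rule.
        coactive = np.outer(post_spikes, pre_spikes)
        self.W += lr * coactive
        self.trace = decay * self.trace + coactive

    def prune(self, w_thresh=0.02, a_thresh=0.05):
        # Eliminate synapses that are both weak and rarely co-active,
        # mimicking developmental pruning of underused connections.
        dead = (np.abs(self.W) < w_thresh) & (self.trace < a_thresh)
        self.W[dead] = 0.0
        return dead.mean()  # fraction of synapses removed this round

rng = np.random.default_rng(1)
layer = PrunableSpikingLayer()
for _ in range(1000):
    pre = (rng.random(100) < 0.1).astype(float)   # Poisson-like input spikes
    post = (layer.W @ pre > 0.5).astype(float)    # crude threshold "neurons"
    layer.update(pre, post)
print(f"pruned: {layer.prune():.1%}")
```

In this sketch, pruning is a joint weight-magnitude and co-activity criterion, one simple way to operationalize "selectively eliminating weak or redundant synapses"; the paper's actual rule may differ.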