From Representation to Action: A Unified Laplacian Framework for Spatial Representation and Path Planning
Abstract
Navigation in complex environments relies on internal spatial representations that guide action. While the brain employs a diverse repertoire of spatially tuned cells—including grid, place, and head-direction cells—a normative theory linking these static neural codes to the dynamic process of navigation remains elusive. In this work, we propose a Unified Laplacian Framework derived from first principles of representational smoothness and efficiency. We first demonstrate that diverse spatial codes emerge naturally as spectral decompositions of the Laplacian operator. Crucially, bridging the gap from representation to action, we derive a biologically plausible navigation policy based on the Green's function potential. We show that this potential encodes the environment's intrinsic geometry, enabling simple, trap-free gradient ascent and achieving significantly improved sample efficiency and generalization in goal-reaching tasks. Furthermore, we demonstrate that these spectral representations can be learned directly from high-dimensional visual inputs, confirming the framework's plausibility in realistic environments. Our results suggest that the "cognitive map" can be viewed as a spectral embedding of the Laplacian, providing a rigorous foundation for spatial cognition in both biological and artificial agents.
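To make the navigation policy described above concrete, the following is a minimal sketch (not the authors' implementation) of Green's-function gradient ascent on a grid-world graph. It builds the combinatorial graph Laplacian of a 4-connected grid with an obstacle wall, takes the goal column of the Laplacian's Moore-Penrose pseudo-inverse as the potential, and follows the steepest-ascending neighbor. All function names (`grid_laplacian`, `green_potential`, `greedy_ascent`) and the specific grid layout are illustrative assumptions.

```python
# Illustrative sketch of Green's-function navigation on a grid graph.
import numpy as np

def grid_laplacian(n, obstacles=frozenset()):
    """Combinatorial Laplacian of an n x n 4-connected grid, skipping obstacle cells."""
    cells = [(r, c) for r in range(n) for c in range(n) if (r, c) not in obstacles]
    idx = {cell: i for i, cell in enumerate(cells)}
    L = np.zeros((len(idx), len(idx)))
    for (r, c), i in idx.items():
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in idx:
                L[i, idx[nb]] = -1.0   # off-diagonal: -1 per edge
                L[i, i] += 1.0         # diagonal: node degree
    return L, idx

def green_potential(L, goal_idx):
    """Goal column of the pseudo-inverse Green's function; peaks at the goal."""
    return np.linalg.pinv(L)[:, goal_idx]

def greedy_ascent(idx, V, start, goal, max_steps=200):
    """Step to the neighbor with the highest potential until the goal is reached."""
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            break
        r, c = pos
        nbrs = [nb for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                if nb in idx]
        pos = max(nbrs, key=lambda nb: V[idx[nb]])
        path.append(pos)
    return path

wall = {(2, 1), (2, 2), (2, 3), (2, 4)}
L, idx = grid_laplacian(6, obstacles=wall)
V = green_potential(L, idx[(5, 5)])
path = greedy_ascent(idx, V, start=(0, 0), goal=(5, 5))
print(path[-1])  # -> (5, 5)
```

The trap-free property follows from a discrete maximum principle: off the goal, the potential satisfies \(LV = -1/n\) per node, so every non-goal node's value lies strictly below the average of its neighbors, ruling out spurious local maxima.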