The Hippocampal Place Field Gradient: A Bio-inspired Framework for Building Multiscale Representations to Improve Sample Efficiency
Abstract
The hippocampus encodes space through a striking gradient of place field sizes along its dorsal-ventral axis, yet the principles generating this continuous gradient from discrete grid cell inputs remain unclear. We propose a unified theoretical framework establishing how multiscale hippocampal place fields arise from the frequency-dependent decay of grid cell projections. Functionally, this organization installs an inductive bias in the population code, managing a fundamental trade-off between spatial precision and sample efficiency. Translating this insight to artificial neural networks, we incorporate a hippocampus-inspired positional embedding (HIPE) into the Transformer architecture to induce multi-scale representations. Experimental results confirm that this mechanism improves data efficiency. Our work establishes a link between neural connectivity, activity patterns, and learning, suggesting a principled approach to exploiting multi-scale representations for sample-efficient learning.