
Poster

Shared Attractor Dynamics in Spatial Navigation and Language Parsing

Xiaolong Zou · Xingxing Cao · Xiaojiao Yang · Bo Hong


Abstract:

Increasing experimental evidence suggests that the human hippocampus, evolutionarily shaped by spatial navigation, also plays an important role in language comprehension, pointing to a computational mechanism shared between spatial navigation and language processing. However, the specific relationship between the hippocampal formation's computational mechanism in spatial navigation and its role in language processing remains elusive. To investigate this question, we develop a prefrontal-hippocampal-entorhinal model (termed PHE-trinity) with two key features: 1) a modular continuous attractor neural network that represents syntactic structure, akin to the grid network in the entorhinal cortex; 2) two separate input streams, mirroring the factorized structure-content representation found in the hippocampal formation. We evaluate the model on language command parsing tasks. Our findings are: 1) attractor dynamics facilitate systematic generalization and efficient learning from limited data; 2) through visualization and reverse engineering, we uncover a potential dynamic mechanism by which the grid network represents syntactic structure. Our research takes an initial step toward uncovering the dynamic mechanism shared by spatial navigation and language information processing.
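To give a concrete sense of the continuous attractor dynamics the abstract refers to, the sketch below implements a textbook one-dimensional ring attractor in NumPy: neurons arranged on a ring with cosine-tuned recurrent weights sustain a localized activity "bump" whose position encodes a continuous variable. This is a generic illustration under our own parameter choices (`J0`, `J1`, the rectified-linear rate dynamics), not the authors' PHE-trinity model or its grid-cell modules.

```python
import numpy as np

# Ring attractor sketch (illustrative assumption, not the paper's model):
# N neurons with preferred angles on [0, 2*pi); recurrent weights combine
# uniform inhibition (J0 < 0) with cosine-tuned excitation (J1 > 2 puts
# the network in the bump-forming regime of the classic ring model).
N = 128
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
J0, J1 = -1.0, 3.0
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def simulate(r0, drive=0.5, steps=600, dt=0.1):
    """Euler-integrate the rate dynamics dr/dt = -r + relu(W @ r + drive)."""
    r = r0.copy()
    for _ in range(steps):
        r += dt * (-r + np.maximum(W @ r + drive, 0.0))
    return r

# A weak cue near angle pi breaks the symmetry; the recurrent dynamics
# amplify it into a self-sustained bump centered near the cued location.
rng = np.random.default_rng(0)
cue = 0.1 * np.maximum(np.cos(theta - np.pi), 0.0) + 0.01 * rng.random(N)
r = simulate(cue)
bump_center = theta[np.argmax(r)]
```

The bump's position varies continuously around the ring, which is the property that makes such networks candidates for encoding continuous structural variables; the paper's contribution lies in connecting modular versions of this mechanism to syntactic structure, which this sketch does not attempt.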
