E-mem: Multi-Agent Based Episodic Context Reconstruction for LLM Agent Memory
Abstract
The evolution of Large Language Model (LLM) agents toward System~2 reasoning, characterized by deliberative, high-precision problem-solving, necessitates maintaining rigorous logical integrity over extended horizons. However, prevalent memory preprocessing paradigms incur destructive de-contextualization: by compressing fluid sequential dependencies into pre-defined structures (e.g., embeddings or graphs), these methods sever the narrative continuity essential for deep reasoning. To address this, we propose E-mem, a framework that shifts from Memory Preprocessing to Episodic Context Reconstruction, inspired by biological engrams. E-mem employs a heterogeneous hierarchical architecture in which multiple assistant agents maintain uncompressed memory contexts while a central master agent orchestrates global planning. Unlike passive retrieval, our mechanism empowers assistants to reason locally within activated segments, extracting context-aware evidence before aggregation. Evaluations on the LoCoMo benchmark demonstrate that E-mem achieves an F1 score above 54\%, surpassing the state-of-the-art GAM by 7.75\%, while reducing token cost by over 70\%. Our code is available at \url{https://anonymous.4open.science/r/E-mem-F6C3/}.