

Poster

Larimar: Large Language Models with Episodic Memory Control

Payel Das · Subhajit Chaudhury · Elliot Nelson · Igor Melnyk · Sarath Swaminathan · Sophie Dai · Aurelie Lozano · Georgios Kollias · Vijil Chenthamarakshan · Jiri Navratil · Soham Dan · Pin-Yu Chen

Hall C 4-9 #804
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar, a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive retraining or fine-tuning. Experimental results on multiple fact-editing benchmarks demonstrate that Larimar not only attains accuracy comparable to the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed, yielding speed-ups of 8-10x depending on the base LLM, and in flexibility, since the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar, and show their effectiveness. Our code is available at https://github.com/IBM/larimar.
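To make the "one-shot update" idea concrete, the sketch below illustrates a least-squares, Kanerva-machine-style episodic memory of the kind the abstract describes: writing a fact solves for a memory address via a pseudo-inverse and applies a rank-1 correction, so the fact is retrievable immediately, with no gradient-based retraining. This is a minimal illustration under assumed simplifications (a purely linear memory; the encoder/decoder and all variable names here are hypothetical), not the released IBM/larimar implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 64, 128                    # memory slots, latent dimension
M = rng.normal(size=(K, D))       # memory matrix, one row per slot

def write(M, z):
    """One-shot write: find an address w with w @ M ~= z, then apply a
    rank-1 least-squares correction so the fact z is stored exactly."""
    w = z @ np.linalg.pinv(M)                       # addressing via pseudo-inverse
    return M + np.outer(w, z - w @ M) / (w @ w)     # rank-1 memory update

def read(M, z_query):
    """Read: address the memory with the query encoding and return the
    reconstructed latent (the projection onto the memory's row space)."""
    w = z_query @ np.linalg.pinv(M)
    return w @ M

z_fact = rng.normal(size=D)       # latent encoding of an edited fact
M = write(M, z_fact)              # single write, no fine-tuning step
z_out = read(M, z_fact)
print(np.allclose(z_out, z_fact, atol=1e-5))  # True: fact is recoverable
```

Because the update is a single linear solve rather than an optimization loop, an edit costs one matrix operation, which is consistent with the speed-ups over gradient-based editing baselines reported in the abstract.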
