PersistBench: When Should Long-Term Memories Be Forgotten by LLMs?
Sidharth Pulipaka ⋅ Oliver Chen ⋅ Manas Sharma ⋅ Taaha Saleem Bajwa ⋅ Vyas Raina ⋅ Ivaxi Sheth
Abstract
Conversational assistants are increasingly integrating long-term memory with large language models (LLMs). This persistence of memories, e.g., the user is vegetarian, can enhance personalization in future conversations. However, the same persistence can also introduce safety risks that have been largely overlooked. Hence, we introduce \textbf{PersistBench} to measure the extent of these safety risks. We identify two long-term memory-specific risks: \textit{cross-domain leakage}, where LLMs inappropriately inject context from the long-term memories; and \textit{memory-induced sycophancy}, where stored long-term memories insidiously reinforce user biases. We evaluate 18 frontier and open-source LLMs on our benchmark. Our results reveal a surprisingly high failure rate across these LLMs - a median failure rate of $53\%$ on cross-domain samples and $97\%$ on sycophancy samples. To address this, our benchmark encourages the development of more robust and safer long-term memory usage in frontier conversational systems.
Successful Page Load