EpiCache: Episodic KV Cache Management for Long-Term Conversation on Resource-Constrained Environments
Minsoo Kim ⋅ Arnav Kundu ⋅ Han-Byul Kim ⋅ Richa Dixit ⋅ Minsik Cho
Abstract
Modern large language models (LLMs) extend context lengths to millions of tokens, enabling coherent, personalized responses grounded in long conversational history. However, the Key-Value (KV) cache grows linearly with dialogue length, causing the model’s memory footprint to quickly exceed device limits. While recent KV cache compression methods attempt to reduce memory usage, most apply cache eviction only after processing the entire context, incurring unbounded peak memory. Moreover, query-dependent eviction narrows the cache semantics to a single query, leading to failures in multi-turn conversations. In this paper, we introduce EpiCache, a training-free KV cache management framework for long conversational question answering (LongConvQA) under fixed memory budgets. EpiCache bounds cache growth through block-wise prefill and preserves topic-relevant context via episodic KV compression, which clusters the conversation history into coherent episodes and performs episode-specific KV cache eviction. Across three LongConvQA benchmarks (LongMemEval, Realtalk, and LoCoMo), EpiCache improves accuracy by up to 30%, achieves near-full-cache accuracy under 4–6× compression, and reduces latency and peak memory by up to 2.4× and 3.7×, respectively.
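The abstract describes the mechanism only at a high level. As a rough illustration of the episodic idea, not the paper's implementation, the sketch below clusters turn embeddings into episodes with k-means and keeps a fixed per-episode budget of the highest-scoring KV entries. Every input name (turn_embeddings, kv_scores, budget_per_episode), the choice of k-means, and the one-KV-entry-per-turn simplification are assumptions made for this sketch.

```python
# Hypothetical sketch of episodic KV compression, NOT the authors' code.
# Assumes turns are already embedded as vectors and each cached KV entry
# carries an importance score (e.g., an accumulated attention weight).
import numpy as np
from sklearn.cluster import KMeans

def cluster_episodes(turn_embeddings: np.ndarray, n_episodes: int) -> np.ndarray:
    """Group conversation turns into topically coherent episodes."""
    km = KMeans(n_clusters=n_episodes, n_init=10, random_state=0)
    return km.fit_predict(turn_embeddings)

def episodic_eviction(kv_scores: np.ndarray,
                      episode_ids: np.ndarray,
                      budget_per_episode: int) -> np.ndarray:
    """Return a keep-mask retaining the top-scoring KV entries within each
    episode, so every topic keeps context under a bounded total budget of
    at most n_episodes * budget_per_episode entries."""
    keep = np.zeros(kv_scores.shape[0], dtype=bool)
    for ep in np.unique(episode_ids):
        idx = np.where(episode_ids == ep)[0]
        top = idx[np.argsort(kv_scores[idx])[-budget_per_episode:]]
        keep[top] = True
    return keep

# Toy usage: 12 turns in a 16-dim embedding space, 3 episodes, budget 2 each.
rng = np.random.default_rng(0)
episodes = cluster_episodes(rng.normal(size=(12, 16)), n_episodes=3)
mask = episodic_eviction(rng.random(12), episodes, budget_per_episode=2)
print(episodes, mask.sum())  # episode label per turn; count of retained entries
```

In this toy version, evicting per episode rather than per query is what keeps context for several topics alive at once; a single-query eviction policy would concentrate the whole budget on one topic, which is the multi-turn failure mode the abstract points out.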