ParisKV: Fast and Drift-Robust KV-Cache Retrieval for Long-Context LLMs
Abstract
KV-cache retrieval is essential for long-context LLM inference, yet existing methods struggle with distribution drift and high latency at scale. We introduce ParisKV, a drift-robust, GPU-native KV-cache retrieval framework that pairs collision-based candidate selection with a quantized inner-product reranking estimator. For million-token contexts, ParisKV supports CPU-offloaded KV caches via Unified Virtual Addressing (UVA), enabling on-demand top-k fetching with minimal overhead. ParisKV matches or outperforms full-attention quality on both long-input and long-generation benchmarks. It achieves state-of-the-art long-context decoding efficiency: it matches or exceeds full-attention speed even at batch size 1 for long contexts, delivers up to 2.8× higher throughput within full attention's runnable range, and scales to million-token contexts where full attention runs out of memory. At million-token scale, ParisKV reduces decode latency by 17× over MagicPIG and 44× over PQCache, two state-of-the-art KV-cache top-k retrieval baselines.
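To make the two-stage retrieval concrete, the following is a minimal sketch of collision-based candidate selection followed by quantized inner-product reranking. The SimHash-style signatures, the symmetric int8 quantization, the collision threshold, and the function names (`simhash_signatures`, `quantize_int8`, `topk_retrieve`) are all illustrative assumptions, not ParisKV's actual hashing scheme or estimator.

```python
import numpy as np

def simhash_signatures(x, planes):
    # Sign pattern of x against random hyperplanes -> bit signature per vector.
    return x @ planes > 0

def quantize_int8(x):
    # Symmetric per-vector int8 quantization; returns codes and a float scale.
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0 + 1e-12
    return np.round(x / scale).astype(np.int8), scale

def topk_retrieve(query, keys, k=8, n_planes=16):
    d = query.shape[-1]
    rng = np.random.default_rng(0)
    planes = rng.standard_normal((d, n_planes))

    # Stage 1: collision-based candidate selection. Keep keys whose
    # signature agrees with the query's on enough hyperplanes.
    q_sig = simhash_signatures(query, planes)        # (n_planes,)
    k_sig = simhash_signatures(keys, planes)         # (n_keys, n_planes)
    collisions = (k_sig == q_sig).sum(axis=1)
    cand = np.nonzero(collisions >= n_planes // 2 + 2)[0]
    if cand.size < k:
        # Fall back to the most-colliding keys if too few pass the threshold.
        cand = np.argsort(-collisions)[: 4 * k]

    # Stage 2: quantized inner-product reranking over the candidates only.
    kq, ks = quantize_int8(keys[cand])
    qq, qs = quantize_int8(query[None, :])
    scores = (kq.astype(np.int32) @ qq[0].astype(np.int32)) * (ks[:, 0] * qs[0, 0])
    return cand[np.argsort(-scores)[:k]]

# Usage: select k key positions whose values the decode step attends over.
keys = np.random.randn(4096, 128).astype(np.float32)
query = np.random.randn(128).astype(np.float32)
print(topk_retrieve(query, keys, k=8))
```

The two stages mirror the abstract's description: cheap signature collisions prune the key set so that the (approximate, quantized) inner products are computed over only a small candidate subset rather than the full cache.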