STAR-KV: Low-Rank KV Cache Compression via Soft Thresholding for Adaptive Rank Control
Priyansh Bhatnagar ⋅ Ashkan Moradirouzabadi ⋅ Se-Hyun Yang ⋅ SeungJae Lee ⋅ Jungwook Choi ⋅ Mingu Kang
Abstract
Low-rank projection has emerged as a promising approach for compressing the KV cache by exploiting hidden-dimension redundancy. However, prior methods rely on fixed or heuristic rank selection and struggle to achieve aggressive compression with minimal accuracy degradation. We propose STAR-KV, an adaptive low-rank KV cache compression framework with fine-grained rank control. STAR-KV comprises 1) a differentiable thresholding mechanism that enables optimal rank selection at both the attention-head and block levels, 2) a hybrid decomposition strategy that applies different low-rank factorizations according to the differing sensitivities of the key and value projections, and 3) a low-rank-aware mixed-precision quantization scheme that leverages data statistics for near-lossless low-bit quantization. Evaluated across multiple LLMs and benchmarks, STAR-KV achieves up to 75% KV cache compression and up to 20× overall KV cache reduction when combined with quantization. Enabled by custom Triton-based GPU kernels, STAR-KV delivers up to 6.9× speedup for the attention module and 3.1× higher end-to-end generation throughput. The source code will be made publicly available.
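To make the core idea concrete, the sketch below applies differentiable soft thresholding to the singular values of a single key or value matrix: components whose singular value falls below a threshold are shrunk to zero, so the retained rank adapts to the data rather than being fixed in advance. This is a minimal illustrative sketch only; the names `soft_threshold` and `compress_kv` and the threshold `tau` are hypothetical, and STAR-KV's actual head- and block-level rank control, hybrid decomposition, and quantization are not shown here.

```python
import torch

def soft_threshold(s, tau):
    # Differentiable soft thresholding: shrink singular values toward zero.
    # Components whose value falls below tau are eliminated, so the number
    # of surviving components is the effective (adaptive) rank.
    return torch.relu(s - tau)

def compress_kv(kv, tau):
    # Low-rank factorization of one KV matrix of shape (seq_len, hidden_dim).
    U, S, Vh = torch.linalg.svd(kv, full_matrices=False)
    S_shrunk = soft_threshold(S, tau)
    keep = S_shrunk > 0                       # surviving rank after shrinkage
    A = U[:, keep] * S_shrunk[keep]           # (seq_len, r) left factor
    B = Vh[keep, :]                           # (r, hidden_dim) right factor
    return A, B

# Usage: a synthetic key matrix with low-rank structure plus noise; the
# threshold recovers roughly the underlying rank automatically.
K = torch.randn(512, 16) @ torch.randn(16, 128) + 0.1 * torch.randn(512, 128)
A, B = compress_kv(K, tau=10.0)
print(f"effective rank: {A.shape[1]}/{min(K.shape)}, "
      f"rel. error: {torch.norm(K - A @ B) / torch.norm(K):.3f}")
```

In a trained system the threshold would be a learnable parameter, so the effective rank per head or block is selected by gradient descent rather than hand-tuned; the per-matrix SVD above is only a stand-in for that offline factorization.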