Semantic Cache Distillation: Efficient State Transfer via Reuse and Selective Patching
Qianli Ma ⋅ Zhiqing Tang ⋅ Hanshuai Cui ⋅ Zhi Yao ⋅ Weijia Jia
Abstract
Disaggregated serving alleviates memory bottlenecks in Large Language Model (LLM) inference but creates a severe communication bottleneck: transmitting high-dimensional Key-Value (KV) caches often dominates time-to-first-token (TTFT). Moreover, reusing caches across heterogeneous models (e.g., base and fine-tuned variants) causes semantic misalignment that accumulates over layers, degrading generation quality. We propose Semantic Cache Distillation (SCD), a loss-constrained framework that replaces raw KV transmission with compact semantic codes. SCD addresses these challenges via two mechanisms: (1) \textsc{Reuse}, which reconstructs the KV caches of most layers from low-rank subspaces to minimize transfer cost, and (2) \textsc{Patch}, which predicts normalized inputs at sparse transition layers to truncate error propagation. Empirically, SCD reduces data transfer by up to 2.65$\times$ and outperforms quantization and selective-recomputation baselines in bandwidth-constrained regimes while maintaining generation quality within 5\% of the oracle.
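The abstract does not specify implementation details, so the following is a minimal sketch of the two mechanisms under stated assumptions: a per-layer KV matrix of shape (seq_len, hidden_dim), truncated SVD standing in for whatever low-rank procedure SCD actually uses for \textsc{Reuse}, and a single linear-plus-LayerNorm predictor standing in for \textsc{Patch}. The names `compress_kv`, `reconstruct_kv`, and `TransitionPatch` are hypothetical and purely illustrative.

```python
import torch

def compress_kv(kv: torch.Tensor, rank: int):
    # Truncated SVD keeps the top-`rank` singular directions, cutting the
    # transfer cost from seq_len * hidden_dim values down to
    # rank * (seq_len + hidden_dim) values (the two factors below).
    U, S, Vh = torch.linalg.svd(kv, full_matrices=False)
    return U[:, :rank] * S[:rank], Vh[:rank, :]

def reconstruct_kv(left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
    # Decode-side reconstruction: a single matmul recovers an approximate
    # KV matrix from the transmitted low-rank factors.
    return left @ right

class TransitionPatch(torch.nn.Module):
    # Hypothetical lightweight corrector for a sparse transition layer:
    # maps a (possibly misaligned) reconstructed hidden state to the
    # normalized input the target model expects, truncating the
    # layer-to-layer accumulation of reconstruction error.
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_dim, hidden_dim)
        self.norm = torch.nn.LayerNorm(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(self.proj(x))

if __name__ == "__main__":
    seq_len, hidden_dim, rank = 512, 4096, 64  # illustrative sizes
    kv = torch.randn(seq_len, hidden_dim)
    left, right = compress_kv(kv, rank)
    approx = reconstruct_kv(left, right)
    rel_err = (kv - approx).norm() / kv.norm()
    ratio = kv.numel() / (left.numel() + right.numel())
    print(f"compression ratio: {ratio:.1f}x, relative error: {rel_err:.3f}")
```

The design intuition this sketch captures: transmitting the two rank-$r$ factors costs $O(r(n+d))$ values instead of the $O(nd)$ cost of the raw per-layer cache, and a small learned patch at a few transition layers can correct the residual misalignment rather than recomputing or retransmitting every layer.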