Poster

OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference

Yuzhe Gu ⋅ Xiyu Liang ⋅ Jiaojiao Zhao ⋅ Enmao Diao
