

Poster

Accelerating Iterative Retrieval-augmented Language Model Serving with Speculation

Zhihao Zhang · Alan Zhu · Lijie Yang · Yihua Xu · Lanting Li · Phitchaya Phothilimthana · Zhihao Jia

Hall C 4-9 #608
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract: This paper introduces RaLMSpec, a framework that accelerates iterative retrieval-augmented language model (RaLM) serving with *speculative retrieval* and *batched verification*. RaLMSpec further introduces several important systems optimizations, including prefetching, an optimal speculation stride scheduler, and asynchronous verification. The combination of these techniques allows RaLMSpec to significantly outperform existing systems. For document-level iterative RaLM serving, evaluation over three LLMs on four QA datasets shows that RaLMSpec improves over existing approaches by $1.75$-$2.39\times$, $1.04$-$1.39\times$, and $1.31$-$1.77\times$ when the retriever is an exact dense retriever, approximate dense retriever, and sparse retriever, respectively. For token-level iterative RaLM (KNN-LM) serving, RaLMSpec is up to $7.59\times$ and $2.45\times$ faster than existing methods for exact dense and approximate dense retrievers, respectively.
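The core idea of speculative retrieval with batched verification can be illustrated with a toy sketch: serve retrieval guesses from a cheap local cache so generation can proceed, then verify a batch of guesses against the true (expensive) retriever, rolling back to the verified result on a mismatch. All names below (`ToyRetriever`, `serve_with_speculation`, the overlap-based scoring) are illustrative assumptions, not RaLMSpec's actual implementation.

```python
# Illustrative sketch of speculative retrieval + batched verification.
# ToyRetriever and the word-overlap scoring are stand-ins for a real
# dense/sparse retriever; none of these names come from RaLMSpec itself.

class ToyRetriever:
    """Stands in for the expensive ground-truth retriever."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query):
        # Naive word-overlap scoring in place of real similarity search.
        return max(self.docs,
                   key=lambda d: len(set(query.split()) & set(d.split())))

def serve_with_speculation(queries, retriever, cache, stride=3):
    """Speculate retrievals from a local cache; verify in batches of `stride`."""
    results, speculated, correct = [], [], 0
    for q in queries:
        guess = cache.get(q)           # speculative retrieval (may be stale/empty)
        speculated.append((q, guess))
        if len(speculated) == stride:  # batched verification point
            for qq, g in speculated:
                truth = retriever.retrieve(qq)
                if g == truth:
                    correct += 1       # speculation correct: no rollback needed
                cache[qq] = truth      # refresh cache for future speculation
                results.append(truth)  # on mismatch, "roll back" to the truth
            speculated.clear()
    for qq, g in speculated:           # flush any leftover speculations
        truth = retriever.retrieve(qq)
        if g == truth:
            correct += 1
        cache[qq] = truth
        results.append(truth)
    return results, correct

# Example: repeated queries make later speculations verifiably correct.
retriever = ToyRetriever(["cats drink milk", "dogs chase balls"])
out, correct = serve_with_speculation(
    ["cats milk", "dogs balls", "cats milk", "dogs balls"],
    retriever, cache={}, stride=2)
```

The sketch only models the caching and verification control flow; the real system overlaps verification with generation (asynchronous verification) and adapts `stride` dynamically, which this toy omits.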
