Poster in Workshop: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)

Efficient Document Ranking with Learnable Late Interactions

Himanshu Jain · Ziwei Ji · Ankit Singh Rawat · Andreas Veit · Sadeep Jayasumana · Sashank J. Reddi · Aditya Menon · Felix Xinnan Yu


Abstract:

Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for predicting query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings; the former typically achieves higher quality, while the latter offers lower latency. Recently, late-interaction models have been proposed to realize more favorable latency-quality trade-offs by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and their approximation power is not understood; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden over DE models. In this paper, we propose novel \emph{learnable} late-interaction models (LITE) that resolve these issues. Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks such as MS MARCO and Natural Questions, as well as out-of-domain tasks such as BEIR. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires only 0.25x the storage of ColBERT.
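To make the contrast between a hand-crafted and a learnable late-interaction scorer concrete, here is a minimal PyTorch sketch. The first function implements ColBERT's MaxSim scoring over query and document token embeddings; the second is a hypothetical learnable scorer (a small MLP over the token-similarity matrix) standing in for LITE. The abstract does not specify LITE's actual architecture, so the class name, shapes, and hidden size below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Shapes: Q is (num_query_tokens, dim) query token embeddings,
#         D is (num_doc_tokens, dim) document token embeddings,
# both produced independently by a dual-encoder backbone (not shown).

def colbert_maxsim(Q: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
    """Hand-crafted ColBERT-style late interaction: for each query token,
    take the max similarity over document tokens, then sum over query tokens."""
    sim = Q @ D.T                      # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=-1).values.sum()

class HypotheticalLiteScorer(nn.Module):
    """A learnable late-interaction scorer: a small MLP applied to the
    token-token similarity matrix. Illustrative stand-in for LITE."""
    def __init__(self, num_query_tokens: int, num_doc_tokens: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_query_tokens * num_doc_tokens, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, Q: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
        # Query and document embeddings stay factorized; they interact
        # only through this similarity matrix, preserving the DE structure.
        sim = Q @ D.T
        return self.mlp(sim.flatten()).squeeze(-1)

# Toy usage: 8 query tokens, 32 document tokens, embedding dim 128.
Q = torch.randn(8, 128)
D = torch.randn(32, 128)
print(colbert_maxsim(Q, D))
scorer = HypotheticalLiteScorer(num_query_tokens=8, num_doc_tokens=32)
print(scorer(Q, D))
```

The key difference is that MaxSim is a fixed aggregation rule, whereas the MLP's parameters are trained end-to-end; the abstract's universal-approximation result concerns the expressive power of such learned scorers over continuous scoring functions.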
