Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Scaling the Vocabulary of Non-autoregressive Models for Efficient Generative Retrieval
Ravisri Valluri · Akash Kumar Mohankumar · Kushal Dave · Amit Singh · Jian Jiao · Manik Varma · Gaurav Sinha
Keywords: [ Information Retrieval ] [ fast inference ] [ non-autoregressive generation ] [ generative retrieval ] [ Natural Language Processing ]
Generative Retrieval introduces a new approach to Information Retrieval by reframing it as a constrained generation task, leveraging recent advancements in Autoregressive (AR) language models. However, AR-based Generative Retrieval methods suffer from high inference latency and cost compared to traditional dense retrieval techniques, limiting their practical applicability. This paper investigates fully Non-autoregressive (NAR) language models as a more efficient alternative for generative retrieval. While standard NAR models alleviate latency and cost concerns, they exhibit a significant drop in retrieval performance relative to AR models due to their inability to capture dependencies between target tokens. To address this, we question the conventional choice of limiting the target token space to solely words or sub-words. We propose PIXAR, a novel approach that expands the target vocabulary of NAR models to include multi-word entities and common phrases (up to 5 million tokens), thereby reducing token dependencies. PIXAR employs inference optimization strategies to maintain low latency despite the significantly larger vocabulary. Our results demonstrate that PIXAR achieves a relative improvement of 31.0% in MRR@10 on MS MARCO and 23.2% in Hits@5 on Natural Questions compared to standard NAR models with similar latency and cost. Furthermore, online A/B experiments on a large commercial search engine show that PIXAR increases ad clicks by 5.08% and revenue by 4.02%.
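The abstract itself contains no code, so the sketch below is only a rough illustration of the two ideas it names: decoding all target positions in a single parallel step (fully non-autoregressive), and keeping prediction over a multi-million-entry phrase vocabulary cheap by shortlisting candidates before the exact softmax. Everything here is an assumption for illustration, not the authors' implementation: the sizes are scaled down from PIXAR's ~5M-token vocabulary, and the cluster-centroid shortlist is a hypothetical stand-in for the paper's unspecified inference optimizations.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of non-autoregressive (NAR) decoding over a large
# phrase vocabulary. Sizes are scaled down for illustration (PIXAR's
# vocabulary goes up to ~5M entries); the cluster-based shortlist is an
# assumed optimization, not the paper's stated method.
VOCAB_SIZE = 100_000   # phrase-level vocabulary: words, sub-words, entities
HIDDEN = 256           # encoder hidden size
TARGET_LEN = 8         # fixed number of target positions, decoded in parallel
NUM_CLUSTERS = 1_024   # coarse clusters over the output embedding matrix

# Output embedding matrix: one row per vocabulary entry.
out_emb = torch.randn(VOCAB_SIZE, HIDDEN)

# Coarse centroids (e.g. from k-means over out_emb) and each cluster's
# member token ids, both precomputed offline. Random/contiguous here so
# the sketch is self-contained.
centroids = torch.randn(NUM_CLUSTERS, HIDDEN)
cluster_members = torch.arange(VOCAB_SIZE).chunk(NUM_CLUSTERS)

def nar_decode(hidden_states, k_clusters=8, top_k=10):
    """Decode all target positions in one parallel step.

    hidden_states: (TARGET_LEN, HIDDEN), one vector per target position
    from a single encoder forward pass -- no autoregressive loop.
    """
    results = []
    for h in hidden_states:  # positions are scored independently
        # Stage 1: cheap scores against cluster centroids only.
        top_clusters = (h @ centroids.T).topk(k_clusters).indices
        # Stage 2: exact logits over the shortlisted tokens only.
        candidates = torch.cat([cluster_members[c] for c in top_clusters.tolist()])
        probs = F.softmax(h @ out_emb[candidates].T, dim=-1)
        scores, idx = probs.topk(top_k)
        results.append((candidates[idx], scores))
    return results

hidden = torch.randn(TARGET_LEN, HIDDEN)  # stand-in for encoder output
predictions = nar_decode(hidden)          # all positions in one step
```

Because each position is scored independently in this setup, widening the vocabulary with multi-word entries lets a single token carry what would otherwise require several mutually dependent sub-word predictions, which is the dependency-reduction argument the abstract makes.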