

Poster

Repoformer: Selective Retrieval for Repository-Level Code Completion

Di Wu · Wasi Ahmad · Dejiao Zhang · Murali Krishna Ramanathan · Xiaofei Ma

Hall C 4-9 #700
[ Project Page ] [ Paper PDF ] [ Slides ] [ Poster ]
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT
 
Oral presentation: Oral 4D Retrieval
Wed 24 Jul 7:30 a.m. PDT — 8:30 a.m. PDT

Abstract:

Recent advances in retrieval-augmented generation (RAG) have initiated a new era in repository-level code completion. However, the invariable use of retrieval in existing methods exposes issues in both efficiency and robustness, with a large proportion of the retrieved contexts proving unhelpful or harmful to code language models (code LMs). In this paper, we propose a selective RAG framework that avoids retrieval when it is unnecessary. To power this framework, we design a self-supervised learning approach that enables a code LM to accurately self-evaluate whether retrieval can improve its output quality and to robustly leverage potentially noisy retrieved contexts. Using this LM as both the selective RAG policy and the generation model, our framework achieves state-of-the-art repository-level code completion performance on diverse benchmarks including RepoEval, CrossCodeEval, and CrossCodeLongEval, a new long-form code completion benchmark. Meanwhile, our analyses show that selective retrieval brings up to a 70% inference speedup in the online serving setting without harming performance. We further demonstrate that our framework accommodates different generation models, retrievers, and programming languages. These advancements position our framework as an important step towards more accurate and efficient repository-level code completion.
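To make the selective mechanism concrete, below is a minimal sketch of the decision loop the abstract describes: the code LM first self-assesses whether retrieval is likely to improve its completion, and the retriever is invoked only when it would. All names here (`lm_self_assess`, `retrieve_repo_context`, `lm_complete`, the threshold value) are hypothetical placeholders for illustration, not the paper's actual API or training procedure.

```python
# Minimal sketch of selective retrieval-augmented code completion.
# Every function below is a hypothetical stub standing in for a real
# code LM, retriever, and generator; it is not the paper's implementation.

def lm_self_assess(prompt: str) -> float:
    """Hypothetical: probability, as judged by the code LM itself,
    that retrieved repository context would improve its completion."""
    return 0.3  # stub value for illustration


def retrieve_repo_context(prompt: str, repo_files: list[str]) -> str:
    """Hypothetical retriever: return cross-file snippets relevant to the prompt."""
    return "\n".join(repo_files[:2])  # stub: naively take the first two files


def lm_complete(prompt: str, context: str = "") -> str:
    """Hypothetical generator: complete the prompt, optionally with retrieved context."""
    return "completed_code"  # stub output


def selective_completion(prompt: str, repo_files: list[str],
                         threshold: float = 0.5) -> str:
    # Step 1: the LM self-evaluates whether retrieval is likely to help.
    p_retrieve = lm_self_assess(prompt)

    # Step 2: retrieve only when the self-assessment crosses the threshold;
    # otherwise skip retrieval entirely, avoiding retriever latency and
    # the cost of a longer prompt.
    if p_retrieve >= threshold:
        context = retrieve_repo_context(prompt, repo_files)
        return lm_complete(prompt, context=context)
    return lm_complete(prompt)


if __name__ == "__main__":
    print(selective_completion("def parse_config(path):", ["file_a.py", "file_b.py"]))
```

The key design choice sketched here is that a single model serves as both the retrieval policy and the generator, so skipping retrieval requires no separate classifier and directly saves inference cost when the retrieved context would not help.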
