
PAC Prediction Sets for Large Language Models of Code
Adam Khakhar · Stephen Mell · Osbert Bastani

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #435

Prediction sets have recently been shown to be a promising strategy for quantifying the uncertainty of deep neural networks in a way that provides theoretical guarantees. However, existing techniques have largely targeted settings where the space of labels is simple, so prediction sets can be arbitrary subsets of labels. For structured prediction problems where the space of labels is exponential in size, even prediction sets containing a small fraction of all labels can be exponentially large. In the context of code generation, we propose a solution that considers a restricted set of prediction sets that can be compactly represented as partial programs, which are programs with portions replaced by holes. Given a trained code generation model, our algorithm leverages a programming language's abstract syntax tree to generate a set of programs such that the correct program is in the set with high confidence. Valuable applications of our algorithm include a Codex-style code generator with holes in uncertain parts of the generated code, which provides a partial program with theoretical guarantees. We evaluate our approach on PICARD (a T5 model for SQL semantic parsing) and Codex (a GPT model for over a dozen programming languages, including Python), demonstrating that our approach generates compact PAC prediction sets. This is the first research contribution that generates PAC prediction sets for generative code models.
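The two ingredients the abstract describes can be sketched concretely: (1) calibrate a score threshold on held-out data so the correct program lands in the prediction set with high probability, and (2) represent the resulting set compactly as a partial program by replacing low-confidence tokens with holes. The sketch below is a minimal illustration, not the paper's algorithm: it uses a simple Hoeffding-style inflation of the empirical quantile for the PAC step, and the function names (`pac_quantile`, `to_partial_program`) and the `??` hole syntax are hypothetical.

```python
import math

def pac_quantile(calib_scores, epsilon, delta):
    """Pick a threshold tau from nonconformity scores of correct labels on a
    held-out calibration set so that, with probability >= 1 - delta over the
    calibration draw, a fresh correct label has score <= tau with probability
    >= 1 - epsilon. Hypothetical simplification: inflate the empirical
    (1 - epsilon)-quantile by a Hoeffding/DKW margin, not the paper's bound."""
    n = len(calib_scores)
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    level = min(1.0, 1.0 - epsilon + margin)
    k = min(n - 1, math.ceil(level * n) - 1)  # index of the inflated quantile
    return sorted(calib_scores)[max(k, 0)]

def to_partial_program(tokens, confidences, conf_threshold):
    """Replace tokens whose model confidence falls below conf_threshold with
    a hole '??', yielding a partial program that compactly represents the set
    of all programs obtained by filling in the holes."""
    return " ".join(t if c >= conf_threshold else "??"
                    for t, c in zip(tokens, confidences))

# Calibrate a threshold on (toy) nonconformity scores of correct programs.
tau = pac_quantile([0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4],
                   epsilon=0.2, delta=0.1)

# Abstract a generated SQL query into a partial program: confident tokens
# are kept, uncertain ones become holes.
partial = to_partial_program(
    ["SELECT", "name", "FROM", "users", "WHERE", "age", ">", "21"],
    [0.99, 0.9, 0.99, 0.6, 0.95, 0.7, 0.9, 0.3],
    conf_threshold=0.8)
# partial == "SELECT name FROM ?? WHERE ?? > ??"
```

In the paper, the choice of which subtrees to replace with holes is guided by the language's abstract syntax tree rather than by flat token confidences as above; the sketch only conveys the shape of the idea.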

Author Information

Adam Khakhar (University of Pennsylvania)
Stephen Mell (Department of Computer and Information Science, School of Engineering and Applied Science)
Osbert Bastani (University of Pennsylvania)
