Poster

Language Models with Conformal Factuality Guarantees

Christopher Mohri · Tatsunori Hashimoto

Hall C 4-9 #805
[ Paper PDF ]
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Guaranteeing the correctness and factuality of language model (LM) outputs is a major open problem. In this work, we propose conformal factuality, a framework that provides high-probability correctness guarantees for LMs by connecting language modeling and conformal prediction. Our insight is that certifying the correctness of an LM output is equivalent to an uncertainty quantification problem, where the uncertainty set is the entailment set of the LM's output. Using this connection, we show that conformal prediction in language models corresponds to a back-off algorithm that provides high-probability correctness guarantees by progressively making LM outputs less specific (and thereby expanding the associated uncertainty sets). The approach applies to any black-box LM and requires very few human-annotated samples. Evaluations on closed-book QA (FActScore, NaturalQuestions) and reasoning tasks (MATH) show that our approach provides 80-90% correctness guarantees while retaining the majority of the LM's original output.
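As a concrete illustration of the back-off idea, below is a minimal Python sketch of split conformal calibration over sub-claims. It assumes each LM output has already been decomposed into scored sub-claims; the scoring function, the calibration numbers, and the names `conformal_threshold` and `back_off` are illustrative assumptions, not the paper's exact procedure.

```python
import math

def conformal_threshold(calibration_scores, alpha=0.1):
    """Split-conformal quantile: return the smallest confidence level tau
    such that keeping only sub-claims scoring >= tau yields a correct
    (fully entailed) output with probability >= 1 - alpha, assuming the
    calibration examples and the test example are exchangeable.

    calibration_scores: one score per annotated calibration output, e.g.
    the back-off level needed to make that output correct.
    """
    n = len(calibration_scores)
    # Standard finite-sample conformal quantile index.
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:
        return float("inf")  # too few samples: must back off everything
    return sorted(calibration_scores)[k - 1]

def back_off(subclaims, scores, tau):
    """Drop sub-claims below the calibrated threshold; the retained
    output is less specific but covered by the correctness guarantee."""
    return [c for c, s in zip(subclaims, scores) if s >= tau]

# Hypothetical usage: scores might come from LM self-evaluation or
# claim frequency under resampling; values here are made up.
calibration_scores = [0.42, 0.55, 0.31, 0.61, 0.48, 0.70, 0.39, 0.52]
tau = conformal_threshold(calibration_scores, alpha=0.2)
kept = back_off(
    ["claim A", "claim B", "claim C"],
    [0.80, 0.45, 0.66],
    tau,
)
print(f"threshold={tau:.2f}, retained sub-claims: {kept}")
```

Note the design choice this captures: the guarantee comes entirely from the calibrated quantile, so the LM and its confidence scores can be treated as a black box, and only the small annotated calibration set needs human labels.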
