

Poster

Linguistic Calibration of Language Models

Neil Band · Xuechen Li · Tengyu Ma · Tatsunori Hashimoto


Abstract:

Language models (LMs) may lead their users to make suboptimal downstream decisions when they confidently hallucinate. This issue can be mitigated by having the LM verbally convey the probability that a claim is correct, but existing models cannot produce text with calibrated confidence statements. Through the lens of decision-making, we formalize linguistic calibration: an LM is linguistically calibrated if its generations enable its users to make calibrated probabilistic predictions. This definition enables a training framework where a supervised finetuning step bootstraps an LM to emit long-form generations with confidence statements such as “I estimate a 30% chance of...” or “I am certain that...”, followed by a reinforcement learning step which rewards generations that enable a user to provide calibrated answers to related questions. We linguistically calibrate Llama 2 7B and find in automated and human evaluations of long-form generations that it is significantly more calibrated than strong finetuned factuality baselines with comparable accuracy, including under distribution shift on question-answering and person biography generation. Our results demonstrate that long-form generations may be calibrated end-to-end by shifting objectives from the space of text to those of downstream predictions.
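To make the decision-based objective concrete, here is a minimal sketch of a calibration reward built on a proper scoring rule (the log score), assuming the reinforcement learning step scores a simulated reader's probabilistic forecast against the true answer. The function name, the `forecast` representation, and the choice of log score are illustrative assumptions; the abstract states only that the reward favors generations enabling calibrated answers.

```python
import math

def calibration_reward(forecast: dict[str, float], true_answer: str) -> float:
    """Log-score reward for one question about the LM's generation.

    `forecast` (an assumed representation) is the probability distribution
    a simulated reader assigns to candidate answers after reading the
    generation. The log score is a proper scoring rule, so maximizing its
    expectation favors generations whose stated confidences let the reader
    form calibrated predictions.
    """
    eps = 1e-9  # guard against log(0) when the reader omits the true answer
    return math.log(forecast.get(true_answer, 0.0) + eps)

# Example: a generation saying "I estimate a 30% chance of X" should
# induce a reader forecast of roughly {"X": 0.3, "Y": 0.7}.
reward = calibration_reward({"X": 0.3, "Y": 0.7}, true_answer="Y")
```

Because the log score is proper, the reader's expected reward is maximized only by reporting its true beliefs, which in turn pressures the LM's confidence statements toward calibration.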
