Poster

Distinguishing the Knowable from the Unknowable with Language Models

Gustaf Ahdritz · Tian Qin · Nikhil Vyas · Boaz Barak · Benjamin Edelman

Hall C 4-9 #914
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

We study the feasibility of identifying epistemic uncertainty (reflecting a lack of knowledge), as opposed to aleatoric uncertainty (reflecting entropy in the underlying distribution), in the outputs of large language models (LLMs) over free-form text. In the absence of ground-truth probabilities, we explore a setting where, in order to (approximately) disentangle a given LLM's uncertainty, a significantly larger model stands in as a proxy for the ground truth. We show that small linear probes trained on the embeddings of frozen, pretrained models accurately predict when larger models will be more confident at the token level and that probes trained on one text domain generalize to others. Going further, we propose a fully unsupervised method that achieves non-trivial accuracy on the same task. Taken together, we interpret these results as evidence that LLMs naturally contain internal representations of different types of uncertainty that could potentially be leveraged to devise more informative indicators of model confidence in diverse practical settings. Code can be found at: https://github.com/KempnerInstitute/llm_uncertainty
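To make the probe setup concrete, here is a minimal sketch of the supervised approach the abstract describes: a linear probe trained on a frozen small model's per-token embeddings to predict whether a much larger proxy model is confident (low entropy) at the same token. The tensors below are random stand-ins for real embeddings and entropies, and the entropy threshold, dimensions, and training hyperparameters are illustrative assumptions, not the paper's choices.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: in the paper's setting, `small_embeddings` would be
# per-token hidden states from a frozen, pretrained small model, and
# `large_entropy` the matching next-token entropies of a significantly larger
# model acting as a proxy for the ground truth. Random data keeps this sketch
# self-contained and runnable.
num_tokens, hidden_dim = 10_000, 768
small_embeddings = torch.randn(num_tokens, hidden_dim)
large_entropy = torch.rand(num_tokens) * 5.0  # placeholder entropies (nats)

# Binary target: 1 where the larger model is confident (low entropy).
# The 0.5-nat cutoff is an illustrative threshold, not the paper's.
labels = (large_entropy < 0.5).float()

# Small linear probe: a single affine map from embedding to a logit.
probe = nn.Linear(hidden_dim, 1)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    optimizer.zero_grad()
    logits = probe(small_embeddings).squeeze(-1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# Token-level accuracy of the probe at predicting large-model confidence.
with torch.no_grad():
    preds = (probe(small_embeddings).squeeze(-1) > 0).float()
    accuracy = (preds == labels).float().mean().item()
print(f"train accuracy: {accuracy:.3f}")
```

In actual use, the embeddings and entropies would come from paired forward passes of the two models over the same text, with held-out tokens (and held-out text domains, per the generalization result) used for evaluation; see the linked repository for the authors' implementation.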
