Poster

Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs

Daniel D. Johnson · Daniel Tarlow · David Duvenaud · Chris Maddison


Abstract: Identifying how much a model $\hat{p}_{\scriptscriptstyle{Y|X}}^{\theta}$ knows about the stochastic real-world process $p_{\scriptscriptstyle{Y|X}}$ it was trained on is important to ensure it avoids producing incorrect or "hallucinated" answers or taking unsafe actions. But this is difficult for generative models because probabilistic predictions do not distinguish between per-response noise (aleatoric uncertainty) and lack of knowledge about the process (epistemic uncertainty), and existing epistemic uncertainty quantification techniques tend to be overconfident when the model underfits. We propose a general strategy for teaching a model to both approximate $p_{\scriptscriptstyle{Y|X}}$ and also estimate the remaining gaps between $\hat{p}_{\scriptscriptstyle{Y|X}}^{\theta}$ and $p_{\scriptscriptstyle{Y|X}}$: train it to predict *pairs* of independent responses drawn from the true conditional distribution, allow it to "cheat" by observing one response while predicting the other, then measure how much it cheats. Remarkably, we prove that being good at cheating (i.e. cheating whenever it improves your prediction) is equivalent to being *second-order calibrated*, a principled extension of ordinary calibration that allows us to construct provably-correct frequentist confidence intervals for $p_{\scriptscriptstyle{Y|X}}$ and detect incorrect responses with high probability. We demonstrate empirically that our approach accurately estimates how much models don't know across ambiguous image classification, (synthetic) language modeling, and partially-observable navigation tasks, outperforming existing techniques.
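To make the pair-prediction idea concrete, below is a minimal illustrative sketch, not the authors' implementation, of how "cheating" could be quantified for a model that outputs a joint distribution over two independent responses to the same query. The function `cheating_gap` and the toy tables `confident` and `uncertain` are hypothetical names introduced here; the quantity computed is the model's mutual information between the two responses, one plausible way to measure how much observing one response improves the prediction of the other.

```python
# Illustrative sketch only: measure how much a pair-prediction model "cheats",
# i.e. how much its prediction of y1 improves after observing y2 for the same x.
# A model that is confident about p(Y|X) predicts an (approximately) independent
# pair and gains nothing from cheating; a model that is unsure hedges between
# hypotheses, so its two responses are correlated and cheating helps.
import numpy as np

def cheating_gap(joint_table: np.ndarray) -> float:
    """Average KL( p(y1 | x, y2) || p(y1 | x) ) under the model's own joint.

    `joint_table[i, j]` is the model's predicted probability that the pair of
    independent responses for a fixed query x equals (y_i, y_j).
    """
    joint = joint_table / joint_table.sum()
    marginal_y1 = joint.sum(axis=1)   # p(y1 | x): prediction without cheating
    marginal_y2 = joint.sum(axis=0)   # p(y2 | x)
    gap = 0.0
    for j, p_y2 in enumerate(marginal_y2):
        if p_y2 == 0:
            continue
        cond_y1 = joint[:, j] / joint[:, j].sum()   # p(y1 | x, y2=j): prediction with cheating
        mask = cond_y1 > 0
        gap += p_y2 * float(np.sum(cond_y1[mask] * np.log(cond_y1[mask] / marginal_y1[mask])))
    return gap

# A confident model: the pair is independent, so cheating does not help.
confident = np.outer([0.7, 0.3], [0.7, 0.3])
# An uncertain model hedging 50/50 between two hypotheses about p(Y|X):
# the responses are correlated under its joint, so observing one helps predict the other.
uncertain = 0.5 * np.outer([0.9, 0.1], [0.9, 0.1]) + 0.5 * np.outer([0.1, 0.9], [0.1, 0.9])

print(cheating_gap(confident))   # ~0: model claims to know p(Y|X)
print(cheating_gap(uncertain))   # > 0: model admits it does not know p(Y|X)
```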
