Poster
Calibrated Approximate Bayesian Inference
Hanwen Xing · Geoff Nicholls · Jeong Lee

Tue Jun 11th 06:30 -- 09:00 PM @ Pacific Ballroom #211

We give a general-purpose computational framework for estimating the bias in coverage that results from making approximations in Bayesian inference. Coverage is the probability that credible sets cover the true parameter values. We show how to estimate the actual coverage an approximation scheme achieves when the ideal observation model and the prior can be simulated but are intractable, so the Monte Carlo replaces them with approximations. The coverage-estimation procedures given in Lee et al. (2018) work well on simple problems, but they are biased and do not scale well, as those authors note. For example, the methods of Lee et al. (2018) fail to calibrate an approximate, completely collapsed MCMC algorithm for partition structure in a Dirichlet process used to cluster group labels in a hierarchical model. By exploiting the symmetry of the coverage error under permutation of low-level group labels and smoothing with Bayesian Additive Regression Trees, we are able to show that the original approximate inference had poor coverage and should not be trusted.
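The basic coverage check the abstract builds on can be sketched as follows: repeatedly draw a true parameter from the prior, simulate data, form a credible interval from the approximate posterior, and record how often the interval contains the truth. The toy conjugate-normal model and the deliberately miscalibrated approximation below are illustrative assumptions, not the authors' actual setup or algorithm.

```python
import random

def estimate_coverage(n_sims=2000, seed=0):
    """Estimate actual coverage of a nominal 90% central credible
    interval computed from an approximate posterior, in a toy model
    where the exact posterior is known (illustrative only)."""
    rng = random.Random(seed)
    z = 1.6448536269514722  # half-width of a 90% interval in sd units
    hits = 0
    for _ in range(n_sims):
        theta = rng.gauss(0.0, 1.0)   # theta ~ N(0, 1) prior
        y = rng.gauss(theta, 1.0)     # y | theta ~ N(theta, 1)
        # Exact posterior here is N(y/2, 1/2); the "approximation"
        # shrinks the posterior sd by 0.7, a common failure mode.
        post_mean = y / 2.0
        approx_sd = (0.5 ** 0.5) * 0.7
        lo = post_mean - z * approx_sd
        hi = post_mean + z * approx_sd
        hits += lo <= theta <= hi
    return hits / n_sims
```

Because the approximate posterior is too narrow, the estimated coverage comes out well below the nominal 90%, which is exactly the kind of miscalibration the framework is designed to detect.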

Author Information

Hanwen Xing (University of Oxford)
Geoff Nicholls (University of Oxford)
Jeong Lee (University of Auckland)
