MIRA: A Score for Conditional Distribution Accuracy and Model Comparison
Abstract
We present Mira, a score that estimates the expected probability that samples from a candidate conditional distribution match the true, unknown conditional distribution, for which only data-label pairs are available. We derive the theoretical bounds the score attains when the candidate distribution matches the true one and when the two conditional distributions are independent. This framework enables model comparison by quantifying the alignment between a candidate model's conditional distribution and data-label pairs drawn from the true one. In particular, Mira enables Bayesian model comparison through direct posterior validation, bypassing the often intractable computation of the model evidence. We demonstrate its effectiveness on several toy problems and Bayesian inference tasks.