Automated Model Selection with Bayesian Quadrature
Henry Chai · Jean-Francois Ton · Michael A Osborne · Roman Garnett

Wed Jun 12th 12:10 -- 12:15 PM @ Room 101

We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection. The state of the art for comparing the evidence of multiple models relies on Monte Carlo methods, which converge slowly and are unreliable for computationally expensive models. Previous research has shown that BQ offers sample efficiency superior to Monte Carlo when computing the evidence of an individual model. However, applying BQ directly to model comparison may waste computation producing an overly accurate estimate of the evidence of a clearly poor model. We propose an automated and efficient algorithm for computing the most relevant quantity for model selection: the posterior probability of a model. Our technique maximizes the mutual information between this quantity and observations of the models' likelihoods, yielding efficient acquisition of samples across disparate model spaces when likelihood observations are limited. Our method produces more accurate model posterior estimates using fewer model likelihood evaluations than standard Bayesian quadrature and Monte Carlo estimators, as we demonstrate on synthetic and real-world examples.
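To make the setting concrete, the quantity BQ targets is the model evidence Z = ∫ L(θ) p(θ) dθ. Below is a minimal sketch of vanilla Bayesian quadrature (not the paper's mutual-information acquisition strategy): a Gaussian-process prior with a squared-exponential kernel is placed on the likelihood surface, and for a Gaussian prior on θ the resulting quadrature weights have closed form. All function names, lengthscales, and the toy likelihood here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bq_evidence(thetas, likelihoods, ell=0.5, prior_var=1.0, jitter=1e-6):
    """Bayesian-quadrature estimate of Z = integral of L(theta) * N(theta; 0, prior_var).

    Places a GP prior (squared-exponential kernel, lengthscale `ell`) on the
    likelihood surface; the kernel-mean weights have closed form when the
    prior over theta is Gaussian. Illustrative sketch, not the paper's method.
    """
    thetas = np.asarray(thetas, dtype=float)
    L = np.asarray(likelihoods, dtype=float)
    # Gram matrix of the SE kernel at the observed hyperparameter samples.
    d2 = (thetas[:, None] - thetas[None, :]) ** 2
    K = np.exp(-0.5 * d2 / ell**2) + jitter * np.eye(len(thetas))
    # Closed-form kernel mean: z_i = integral of k(theta, theta_i) N(theta; 0, prior_var).
    z = np.sqrt(ell**2 / (ell**2 + prior_var)) * np.exp(
        -0.5 * thetas**2 / (ell**2 + prior_var))
    # BQ posterior mean of the evidence: z^T K^{-1} L.
    return z @ np.linalg.solve(K, L)

# Toy check: with L(theta) = N(theta; 0, 0.3^2) and a N(0, 1) prior,
# the true evidence is N(0; 0, 0.3^2 + 1), about 0.382.
grid = np.linspace(-3, 3, 25)
lik = np.exp(-0.5 * grid**2 / 0.3**2) / np.sqrt(2 * np.pi * 0.3**2)
Z_hat = bq_evidence(grid, lik)
```

With densely sampled likelihood observations the GP interpolates the integrand well, which is why BQ can be far more sample-efficient than Monte Carlo; the paper's contribution is choosing *which* likelihood evaluations to make, across multiple models, so that effort is not wasted refining the evidence of clearly poor models.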

Author Information

Henry Chai (Washington University in St. Louis)
Jean-Francois Ton (University of Oxford)
Michael A Osborne (University of Oxford)
Roman Garnett (Washington University in St. Louis)
