

Poster in Workshop: AI for Science: Scaling in AI for Scientific Discovery

Projection Killer: peering through high-dimensional posterior distributions

Marco Raveri · Cyrille Doux · Shivam Pandey

Keywords: [ Density Estimation ] [ Posterior Estimation ] [ Normalizing Flows ]


Abstract:

Many modern applications of Bayesian inference, such as cosmology, rely on complicated forward models with high-dimensional parameter spaces. This considerably limits sampling of posterior distributions conditioned on observed data and, in turn, restricts the interpretation of posteriors to their one- and two-dimensional marginal distributions, even though more information is available in the full-dimensional distribution. We propose to learn smooth and differentiable representations of posterior distributions from their samples using normalizing flows, which we train with an added evidence-error loss term, to extend interpretability in several ways. Motivated by problems from cosmology, we implement a robust method to obtain one- and two-dimensional posterior profiles. These are obtained by optimizing, rather than integrating, over the remaining parameters, and are therefore less prone than marginals to so-called projection effects. We also demonstrate that this representation provides an accurate estimator of the Bayesian evidence, with errors on the log evidence at the 0.2 level, enabling accurate model comparison. We test our method on multi-modal Gaussian mixtures in up to 32 dimensions before applying it to simulated cosmology examples.
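To make the procedure concrete, here is a minimal sketch of the three steps the abstract describes: fitting a normalizing flow to posterior samples, profiling one parameter by optimizing over the others, and estimating the evidence. It assumes PyTorch and the nflows library; the toy Gaussian-mixture data, network sizes, optimizer settings, and the `log_post_unnorm` callable are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the approach above, assuming
# PyTorch and the `nflows` library. Data, architecture, and optimizer
# settings are illustrative.
import torch
from nflows.flows import MaskedAutoregressiveFlow

torch.manual_seed(0)

# Stand-in for posterior samples (e.g. an MCMC chain): a 2D two-component
# Gaussian mixture, mimicking the multi-modal tests described above.
n, dim = 4096, 2
comp = (torch.rand(n, 1) < 0.5).float()
samples = comp * torch.randn(n, dim) + (1 - comp) * (torch.randn(n, dim) + 4.0)

# Step 1: fit a normalizing flow to the samples by maximum likelihood,
# giving a smooth, differentiable surrogate q(theta) of the posterior.
flow = MaskedAutoregressiveFlow(features=dim, hidden_features=64,
                                num_layers=5, num_blocks_per_layer=2)
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    loss = -flow.log_prob(samples).mean()  # negative log-likelihood
    loss.backward()
    optimizer.step()

for p in flow.parameters():
    p.requires_grad_(False)  # freeze the trained flow for profiling

# Step 2: one-dimensional profile of parameter 0. At each grid value we
# *optimize* q over the remaining parameters instead of integrating them
# out, which is less prone to projection effects than marginalization.
grid = torch.linspace(-3.0, 7.0, 50)
profile = []
for value in grid:
    rest = torch.zeros(1, dim - 1, requires_grad=True)
    inner = torch.optim.Adam([rest], lr=0.05)
    for _ in range(200):
        inner.zero_grad()
        theta = torch.cat([value.view(1, 1), rest], dim=1)
        (-flow.log_prob(theta)).backward()
        inner.step()
    with torch.no_grad():
        theta = torch.cat([value.view(1, 1), rest], dim=1)
        profile.append(flow.log_prob(theta).item())

# Step 3: evidence estimate. Since p(theta|d) = L(theta) pi(theta) / Z, if q
# matches the posterior then log Z = log L + log pi - log q at any point;
# averaging over samples stabilizes the estimate. `log_post_unnorm` is a
# hypothetical callable returning log L(theta) + log pi(theta).
# log_Z = (log_post_unnorm(samples) - flow.log_prob(samples)).mean()
```

In higher dimensions one would typically train on minibatches, restart the inner optimization from several initial points to handle multi-modality, and tie the evidence estimate into training via the evidence-error loss term mentioned in the abstract.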
