Amortized Simulation-Based Inference in Generalized Bayes via Neural Posterior Estimation
Shiyi Sun ⋅ Geoff Nicholls ⋅ Jeong Lee
Abstract
Generalized Bayesian Inference (GBI) tempers a loss with a temperature $\beta>0$ to mitigate overconfidence and improve robustness under model misspecification, but existing GBI methods typically rely on costly MCMC or SDE-based samplers and must be re-run for each new dataset and each $\beta$-value. We give the first fully amortized variational approximation to the tempered posterior family $p_\beta(\theta\! \mid\! x) \propto \pi(\theta)p(x\! \mid\! \theta)^\beta$ by training a single $\beta$-conditioned neural posterior estimator $q_\phi(\theta \mid x, \beta)$ that enables sampling in a single forward pass, without simulator calls or inference-time MCMC. We introduce two complementary training routes: (i) synthesizing off-manifold samples $(\theta, x) \sim \pi(\theta)p(x \mid \theta)^\beta$, and (ii) reweighting a fixed base dataset drawn from $\pi(\theta)p(x \mid \theta)$ using self-normalized importance sampling (SNIS), where we show that the SNIS-weighted objective provides a consistent forward-KL fit to the tempered posterior with finite weight variance. Across four standard simulation-based inference (SBI) benchmarks—including the chaotic Lorenz–96 system—our $\beta$-amortized estimator achieves posterior approximations competitive, under standard two-sample metrics, with non-amortized MCMC-based power-posterior samplers over a wide range of temperatures.
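The SNIS route described in the abstract retargets base samples $(\theta, x) \sim \pi(\theta)p(x \mid \theta)$ to the tempered joint $\pi(\theta)p(x \mid \theta)^\beta$, which yields per-sample unnormalized log-weights $(\beta - 1)\log p(x \mid \theta)$. A minimal sketch of the weight computation, assuming per-sample log-likelihoods are available (the function name and interface here are illustrative, not the paper's implementation):

```python
import numpy as np

def snis_weights(log_lik, beta):
    """Self-normalized importance weights for retargeting base samples
    (theta, x) ~ pi(theta) p(x | theta) to the tempered joint
    pi(theta) p(x | theta)^beta.  Each sample's unnormalized log-weight
    is (beta - 1) * log p(x | theta)."""
    log_w = (beta - 1.0) * np.asarray(log_lik, dtype=float)
    log_w -= log_w.max()      # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()        # self-normalization

# Toy check: beta = 1 (standard Bayes) recovers uniform weights.
rng = np.random.default_rng(0)
log_lik = rng.normal(size=5)
print(snis_weights(log_lik, beta=1.0))  # -> [0.2 0.2 0.2 0.2 0.2]
```

These weights would then multiply the per-sample forward-KL loss terms when fitting $q_\phi(\theta \mid x, \beta)$ on the fixed base dataset.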