Calibrated Test-Time Guidance for Bayesian Inference
Daniel Geyfman ⋅ Felix Draxler ⋅ Jan Groeneveld ⋅ Hyunsoo Lee ⋅ Theofanis Karaletsos ⋅ Stephan Mandt
Abstract
Test-time guidance is a widely used mechanism for steering pre-trained diffusion models toward outcomes specified by a reward function. Existing approaches, however, focus on reward maximization rather than sampling from the true Bayesian posterior, leading to miscalibrated inference. In this work, we show that common test-time guidance methods do not recover the correct posterior distribution and identify the structural approximations responsible for this failure. We then propose consistent alternative estimators that enable calibrated sampling from the Bayesian posterior. Across Bayesian inference and inverse problems, our approach yields substantially improved posterior calibration.