Current approaches to amortizing Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is, in turn, used to calculate expectations for one or more target functions—a computational pipeline which is inefficient when the target function(s) are known upfront. In this paper, we address this inefficiency by introducing AMCI, a method for amortizing Monte Carlo integration directly. AMCI operates similarly to amortized inference but produces three distinct amortized proposals, each tailored to a different component of the overall expectation calculation. At runtime, samples are produced separately from each amortized proposal, before being combined into an overall estimate of the expectation. We show that while existing approaches are fundamentally limited in the level of accuracy they can achieve, AMCI can theoretically produce arbitrarily small errors for any integrable target function using only a single sample from each proposal at runtime. We further show that it is able to empirically outperform the theoretically optimal self-normalized importance sampler on a number of example problems. Furthermore, AMCI allows not only for amortizing over datasets but also amortizing over target functions.
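The abstract only sketches the mechanics at a high level. The following is a minimal, self-contained numerical sketch (not the authors' implementation) of how samples from three separate proposals can be combined into a single importance-sampling estimate of a posterior expectation: the target function is split into its positive and negative parts, with one proposal per part and a third proposal for the normalizing constant, mirroring the three-component structure mentioned in the abstract. The toy model and the fixed Gaussian proposals are illustrative assumptions standing in for the learned, amortized proposals in AMCI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: estimate E_{p(x|y)}[f(x)] for a 1-D Gaussian
# posterior p(x|y) = N(x; mu, sigma^2) and target f(x) = x - c, which
# takes both signs. Ground-truth expectation is mu - c.
mu, sigma, c = 1.0, 0.5, 1.0

def log_joint(x):
    # Unnormalized log p(x, y) stand-in: proportional to the posterior.
    return -0.5 * ((x - mu) / sigma) ** 2

f_pos = lambda x: np.maximum(x - c, 0.0)   # positive part of f
f_neg = lambda x: np.maximum(c - x, 0.0)   # negative part of f

# Three proposals, one per component of the expectation:
#   q1 roughly matches f^+(x) p(x|y), q2 matches f^-(x) p(x|y),
#   q3 matches p(x|y) (for the normalizing constant).
# Here they are fixed Gaussians; in AMCI they would be amortized networks.
proposals = {
    "q1": (1.6, 0.5),
    "q2": (0.4, 0.5),
    "q3": (mu, sigma),
}

def is_estimate(target_fn, q_name, n):
    # Plain importance sampling of \int target_fn(x) * exp(log_joint(x)) dx.
    m, s = proposals[q_name]
    x = rng.normal(m, s, size=n)
    log_q = -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    w = np.exp(log_joint(x) - log_q)  # unnormalized importance weights
    return np.mean(target_fn(x) * w)

n = 1000
numerator = is_estimate(f_pos, "q1", n) - is_estimate(f_neg, "q2", n)
normalizer = is_estimate(lambda x: np.ones_like(x), "q3", n)
print("AMCI-style estimate:", numerator / normalizer)
print("ground truth       :", mu - c)
```

The example deliberately uses a target whose true expectation is zero, the regime where a single self-normalized importance sampler is weakest, since combining estimates from separately tailored proposals for the positive and negative parts does not suffer the same cancellation problem.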
Author Information
Adam Golinski (University of Oxford)
Frank Wood (University of British Columbia)
Tom Rainforth (University of Oxford)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Amortized Monte Carlo Integration
  Tue. Jun 11th, 06:40 -- 07:00 PM, Room 101
More from the Same Authors
- 2021: Active Learning under Pool Set Distribution Shift and Noisy Data
  Andreas Kirsch · Tom Rainforth · Yarin Gal
- 2023: Visual Chain-of-Thought Diffusion Models
  William Harvey · Frank Wood
- 2023: Scaling Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Hamed Shirzad · Frank Wood
- 2023 Oral: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
  Andreas Munk · Alexander Mead · Frank Wood
- 2023 Poster: Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Frank Wood
- 2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances
  Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim
- 2023 Poster: CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design
  Desi Ivanova · Joel Jennings · Tom Rainforth · Cheng Zhang · Adam Foster
- 2023 Oral: Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Frank Wood
- 2023 Poster: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
  Andreas Munk · Alexander Mead · Frank Wood
- 2021 Poster: Active Testing: Sample-Efficient Model Evaluation
  Jannik Kossen · Sebastian Farquhar · Yarin Gal · Tom Rainforth
- 2021 Poster: Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
  Adam Foster · Desi Ivanova · Ilyas Malik · Tom Rainforth
- 2021 Poster: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
  Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth
- 2021 Spotlight: Active Testing: Sample-Efficient Model Evaluation
  Jannik Kossen · Sebastian Farquhar · Yarin Gal · Tom Rainforth
- 2021 Oral: Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
  Adam Foster · Desi Ivanova · Ilyas Malik · Tom Rainforth
- 2021 Spotlight: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
  Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth
- 2021 Poster: Probabilistic Programs with Stochastic Conditioning
  David Tolpin · Yuan Zhou · Tom Rainforth · Hongseok Yang
- 2021 Spotlight: Probabilistic Programs with Stochastic Conditioning
  David Tolpin · Yuan Zhou · Tom Rainforth · Hongseok Yang
- 2021 Poster: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2021 Oral: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2020: "Designing Bayesian-Optimal Experiments with Stochastic Gradients"
  Tom Rainforth
- 2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support
  Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth
- 2020 Poster: All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference
  Rob Brekelmans · Vaden Masrani · Frank Wood · Greg Ver Steeg · Aram Galstyan
- 2019 Poster: Disentangling Disentanglement in Variational Autoencoders
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Oral: Disentangling Disentanglement in Variational Autoencoders
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2018 Poster: On Nesting Monte Carlo Estimators
  Tom Rainforth · Rob Cornish · Hongseok Yang · Andrew Warrington · Frank Wood
- 2018 Oral: On Nesting Monte Carlo Estimators
  Tom Rainforth · Rob Cornish · Hongseok Yang · Andrew Warrington · Frank Wood
- 2018 Poster: Deep Variational Reinforcement Learning for POMDPs
  Maximilian Igl · Luisa Zintgraf · Tuan Anh Le · Frank Wood · Shimon Whiteson
- 2018 Oral: Deep Variational Reinforcement Learning for POMDPs
  Maximilian Igl · Luisa Zintgraf · Tuan Anh Le · Frank Wood · Shimon Whiteson
- 2018 Poster: Tighter Variational Bounds are Not Necessarily Better
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh
- 2018 Oral: Tighter Variational Bounds are Not Necessarily Better
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh