The recently proposed Thermodynamic Variational Objective (TVO) leverages thermodynamic integration to provide a family of variational inference objectives, which both tighten and generalize the ubiquitous Evidence Lower Bound (ELBO). However, the tightness of TVO bounds was not previously known, an expensive grid search was used to choose a "schedule" of intermediate distributions, and model learning suffered with ostensibly tighter bounds. In this work, we propose an exponential family interpretation of the geometric mixture curve underlying the TVO and various path sampling methods, which allows us to characterize the gap in TVO likelihood bounds as a sum of KL divergences. We propose to choose intermediate distributions using equal spacing in the moment parameters of our exponential family, which matches grid search performance and allows the schedule to adaptively update over the course of training. Finally, we derive a doubly reparameterized gradient estimator which improves model learning and allows the TVO to benefit from more refined bounds. To further contextualize our contributions, we provide a unified framework for understanding thermodynamic integration and the TVO using Taylor series remainders.
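The TVO construction the abstract describes can be illustrated numerically. The sketch below (not the authors' code; function and variable names are mine) estimates the TVO lower bound as a left Riemann sum of E_{pi_beta}[log w] over a schedule of beta values in [0, 1], where pi_beta is the geometric mixture q^{1-beta} p^{beta} and the expectations are estimated by self-normalized importance sampling from q with weights proportional to w^beta. With the trivial schedule {0}, the bound reduces to the standard ELBO.

```python
import numpy as np

def tvo_lower_bound(log_w, betas):
    """Left-Riemann-sum TVO lower bound from K importance samples.

    log_w : (K,) array of log importance weights log p(x,z) - log q(z|x)
            for z ~ q(z|x).
    betas : increasing schedule in [0, 1], starting at 0 (illustrative;
            the paper chooses these points by equal spacing in moment
            parameters rather than by grid search).
    """
    betas = np.asarray(betas, dtype=float)
    widths = np.diff(np.append(betas, 1.0))  # partition widths covering [0, 1]
    bound = 0.0
    for beta, width in zip(betas, widths):
        # Self-normalized weights proportional to w^beta, in log space
        # for numerical stability; these reweight samples from q toward
        # the geometric-path distribution pi_beta.
        log_u = beta * log_w
        u = np.exp(log_u - log_u.max())
        u /= u.sum()
        # SNIS estimate of E_{pi_beta}[log w], accumulated as a left
        # Riemann term of the thermodynamic integral.
        bound += width * np.sum(u * log_w)
    return bound

rng = np.random.default_rng(0)
log_w = rng.normal(-1.0, 1.0, size=1000)  # synthetic log weights

elbo = log_w.mean()                      # schedule {0} recovers the ELBO
tvo1 = tvo_lower_bound(log_w, [0.0])
tvo2 = tvo_lower_bound(log_w, np.linspace(0.0, 0.9, 10))
```

Because E_{pi_beta}[log w] is nondecreasing in beta (its derivative is a variance), refining the schedule can only raise the left Riemann sum, so `tvo2 >= tvo1` here: a toy instance of the "tighter bounds with more partitions" property.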
Author Information
Rob Brekelmans (University of Southern California)
Vaden Masrani (University of British Columbia)
Frank Wood (University of British Columbia)
Greg Ver Steeg (University of Southern California)
Aram Galstyan (USC Information Sciences Institute)
More from the Same Authors
- 2023: Visual Chain-of-Thought Diffusion Models
  William Harvey · Frank Wood
- 2023: Scaling Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Hamed Shirzad · Frank Wood
- 2023 Oral: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
  Andreas Munk · Alexander Mead · Frank Wood
- 2023 Poster: Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Frank Wood
- 2023 Oral: Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Frank Wood
- 2023 Poster: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
  Andreas Munk · Alexander Mead · Frank Wood
- 2021 Poster: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2021 Oral: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2020 Poster: Improving Generalization by Controlling Label-Noise Information in Neural Network Weights
  Hrayr Harutyunyan · Kyle Reing · Greg Ver Steeg · Aram Galstyan
- 2019 Poster: MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing
  Sami Abu-El-Haija · Bryan Perozzi · Amol Kapoor · Nazanin Alipourfard · Kristina Lerman · Hrayr Harutyunyan · Greg Ver Steeg · Aram Galstyan
- 2019 Oral: MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing
  Sami Abu-El-Haija · Bryan Perozzi · Amol Kapoor · Nazanin Alipourfard · Kristina Lerman · Hrayr Harutyunyan · Greg Ver Steeg · Aram Galstyan
- 2019 Poster: Amortized Monte Carlo Integration
  Adam Golinski · Frank Wood · Tom Rainforth
- 2019 Oral: Amortized Monte Carlo Integration
  Adam Golinski · Frank Wood · Tom Rainforth
- 2018 Poster: Deep Variational Reinforcement Learning for POMDPs
  Maximilian Igl · Luisa Zintgraf · Tuan Anh Le · Frank Wood · Shimon Whiteson
- 2018 Oral: Deep Variational Reinforcement Learning for POMDPs
  Maximilian Igl · Luisa Zintgraf · Tuan Anh Le · Frank Wood · Shimon Whiteson