Recent progress with conditional image diffusion models has been stunning, and this holds true whether we are speaking about models conditioned on a text description, a scene layout, or a sketch. Unconditional image diffusion models are also improving but lag behind, as do diffusion models which are conditioned on lower-dimensional features like class labels. We propose to close the gap between conditional and unconditional models using a two-stage sampling procedure. In the first stage we sample an embedding describing the semantic content of the image. In the second stage we sample the image conditioned on this embedding and then discard the embedding. Doing so lets us leverage the power of conditional diffusion models on the unconditional generation task, which we show improves FID by 25–50% compared to standard unconditional generation.
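The two-stage procedure described in the abstract can be sketched in a few lines. This is a hedged, toy illustration only: the function names (`sample_diffusion`, `two_stage_sample`) and the stand-in denoisers are hypothetical and do not correspond to the authors' actual models or code; they just show the control flow of sampling an embedding first and then conditioning the image sampler on it.

```python
import numpy as np

def sample_diffusion(denoise, shape, steps=50, rng=None):
    """Generic ancestral-style sampling loop: start from Gaussian noise
    and repeatedly apply a denoising step. Purely illustrative."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(shape)
    for t in range(steps, 0, -1):
        x = denoise(x, t)
    return x

def two_stage_sample(embed_denoise, image_denoise, embed_dim, image_shape):
    # Stage 1: sample a semantic embedding with a diffusion model
    # over the lower-dimensional embedding space.
    z = sample_diffusion(embed_denoise, (embed_dim,))
    # Stage 2: sample the image with a conditional diffusion model,
    # conditioning every denoising step on z; z is then discarded,
    # so the overall procedure is still an unconditional image sampler.
    x = sample_diffusion(lambda x, t: image_denoise(x, t, z), image_shape)
    return x

# Toy stand-in denoisers (hypothetical; real models would be trained
# neural networks) so the sketch actually runs end to end.
embed_denoise = lambda z, t: 0.9 * z
image_denoise = lambda x, t, z: 0.9 * x + 0.01 * z.mean()

img = two_stage_sample(embed_denoise, image_denoise,
                       embed_dim=8, image_shape=(4, 4))
print(img.shape)  # (4, 4)
```

Note that the only structural difference from standard unconditional sampling is the extra, cheap first stage over the embedding space; the image-space sampler is an ordinary conditional model.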
Author Information
William Harvey (University of British Columbia)
Frank Wood (UBC + inverted.ai)
More from the Same Authors
- 2023: Scaling Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Hamed Shirzad · Frank Wood
- 2023 Oral: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
  Andreas Munk · Alexander Mead · Frank Wood
- 2023 Poster: Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Frank Wood
- 2023 Oral: Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Frank Wood
- 2023 Poster: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
  Andreas Munk · Alexander Mead · Frank Wood
- 2021 Poster: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2021 Oral: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2020 Poster: All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference
  Rob Brekelmans · Vaden Masrani · Frank Wood · Greg Ver Steeg · Aram Galstyan
- 2019 Poster: Amortized Monte Carlo Integration
  Adam Golinski · Frank Wood · Tom Rainforth
- 2019 Oral: Amortized Monte Carlo Integration
  Adam Golinski · Frank Wood · Tom Rainforth
- 2018 Poster: Deep Variational Reinforcement Learning for POMDPs
  Maximilian Igl · Luisa Zintgraf · Tuan Anh Le · Frank Wood · Shimon Whiteson
- 2018 Oral: Deep Variational Reinforcement Learning for POMDPs
  Maximilian Igl · Luisa Zintgraf · Tuan Anh Le · Frank Wood · Shimon Whiteson