Diffusion models have been successful on a range of conditional generation tasks, including molecular design and text-to-image generation. However, these achievements have primarily depended on expensive, task-specific conditional training or error-prone heuristic approximations. Ideally, a conditional generation method should provide exact samples for a broad range of conditional distributions without requiring task-specific training. To this end, we introduce the Twisted Diffusion Sampler, or TDS, a sequential Monte Carlo (SMC) algorithm that targets the conditional distributions of diffusion models. The main idea is to use twisting, an SMC technique that enjoys good computational efficiency, to incorporate heuristic approximations without compromising asymptotic exactness. We study the properties of TDS on MNIST image inpainting and class-conditional generation tasks. TDS extends to Riemannian diffusion models, which are crucial for protein modeling. When applied to the motif-scaffolding problem, a core problem in protein design, TDS enables more flexible conditioning criteria than conditionally trained models and provides state-of-the-art success rates on 9/12 problems in a benchmark set with scaffolds shorter than 100 residues.
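To make the twisting idea concrete, below is a minimal, self-contained sketch of a twisted SMC sampler for a diffusion-model conditional. It is written in the spirit of TDS but is not the paper's implementation: the toy Gaussian denoiser, the inpainting-style likelihood, the variance schedule, and the reconstruction-guidance twist are all illustrative assumptions chosen so the script runs with NumPy alone.

```python
# Minimal sketch of a twisted SMC sampler for a diffusion-model conditional,
# in the spirit of TDS. Everything model-specific here (the Gaussian toy
# denoiser, the inpainting-style likelihood, the variance schedule, the
# reconstruction-guidance twist) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

D, K, T = 4, 256, 50                      # data dim, particles, diffusion steps
betas = np.linspace(1e-4, 0.2, T)         # assumed DDPM-style variance schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Toy unconditional model: x0 ~ N(mu0, I). Then E[x0 | x_t] is available in
# closed form and stands in for a learned denoiser network.
mu0 = np.zeros(D)
def denoise(x_t, t):
    ab = alpha_bars[t]
    return np.sqrt(ab) * x_t + (1.0 - ab) * mu0

# Conditioning event: noisy observation of the first two coordinates of x0.
obs_idx = np.array([0, 1])
y = np.array([1.5, -0.5])
sigma_y = 0.1

def log_twist(x_t, t):
    # Twisting function p_tilde(y | x_t) = N(y; denoise(x_t)[obs], sigma_y^2 I),
    # i.e. the usual "plug in the denoised mean" heuristic.
    r = y - denoise(x_t, t)[..., obs_idx]
    return -0.5 * np.sum(r ** 2, axis=-1) / sigma_y ** 2

# Initialise particles from the reference N(0, I); weight by the initial twist.
x = rng.standard_normal((K, D))
logw = log_twist(x, T - 1)

for t in range(T - 1, 0, -1):
    # Adaptive resampling when the effective sample size collapses.
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < K / 2:
        x = x[rng.choice(K, size=K, p=w)]
        logw = np.zeros(K)

    # Untwisted reverse-diffusion mean (DDPM posterior mean given denoise()).
    x0_hat = denoise(x, t)
    mean = (np.sqrt(alphas[t]) * (1 - alpha_bars[t - 1]) * x
            + np.sqrt(alpha_bars[t - 1]) * betas[t] * x0_hat) / (1 - alpha_bars[t])
    std = np.sqrt(betas[t])

    # Twisted proposal: shift the mean along the gradient of the log twist
    # (finite differences keep this sketch dependency-free), then sample.
    eps = 1e-3
    grad = np.zeros_like(x)
    for d in range(D):
        dx = np.zeros(D); dx[d] = eps
        grad[:, d] = (log_twist(x + dx, t) - log_twist(x - dx, t)) / (2 * eps)
    prop_mean = mean + std ** 2 * grad
    x_new = prop_mean + std * rng.standard_normal((K, D))

    # Weight update: ratio of untwisted kernel to twisted proposal, times the
    # change in twisting function. This is what keeps the sampler consistent.
    log_p = -0.5 * np.sum((x_new - mean) ** 2, axis=-1) / std ** 2
    log_q = -0.5 * np.sum((x_new - prop_mean) ** 2, axis=-1) / std ** 2
    logw = logw + log_p - log_q + log_twist(x_new, t - 1) - log_twist(x, t)
    x = x_new

w = np.exp(logw - logw.max()); w /= w.sum()
print("weighted mean of observed coords:", (w[:, None] * x).sum(axis=0)[obs_idx])
```

The structural point this sketch illustrates is that the heuristic only enters through the proposal and the twisting functions; the importance weights correct for both, so the weighted particle approximation remains consistent for the conditional as the number of particles grows.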
Author Information
Brian Trippe (Columbia University)
Luhuan Wu (Columbia University)
Christian Naesseth (University of Amsterdam)
David Blei (Columbia University)
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
John Cunningham (Columbia)
More from the Same Authors
- 2022: Optimization-based Causal Estimation from Heterogenous Environments
  Mingzhang Yin · Yixin Wang · David Blei
- 2023: Causal-structure Driven Augmentations for Text OOD Generalization
  Amir Feder · Yoav Wald · Claudia Shi · Suchi Saria · David Blei
- 2023 Poster: Gaussian processes at the Helm(holtz): A more fluid model for ocean currents
  Renato Berlinghieri · Brian Trippe · David Burt · Ryan Giordano · Kaushik Srinivasan · Tamay Özgökmen · Junfei Xia · Tamara Broderick
- 2023 Poster: SE(3) diffusion model with application to protein backbone generation
  Jason Yim · Brian Trippe · Valentin De Bortoli · Emile Mathieu · Arnaud Doucet · Regina Barzilay · Tommi Jaakkola
- 2022: Reconstructing the Universe with Variational self-Boosted Sampling
  Chirag Modi · Yin Li · David Blei
- 2022 Poster: Scaling Structured Inference with Randomization
  Yao Fu · John Cunningham · Mirella Lapata
- 2022 Spotlight: Scaling Structured Inference with Randomization
  Yao Fu · John Cunningham · Mirella Lapata
- 2022 Poster: Variational Inference for Infinitely Deep Neural Networks
  Achille Nazaret · David Blei
- 2022 Spotlight: Variational Inference for Infinitely Deep Neural Networks
  Achille Nazaret · David Blei
- 2022 Poster: Preconditioning for Scalable Gaussian Process Hyperparameter Optimization
  Jonathan Wenger · Geoff Pleiss · Philipp Hennig · John Cunningham · Jacob Gardner
- 2022 Poster: Variational nearest neighbor Gaussian process
  Luhuan Wu · Geoff Pleiss · John Cunningham
- 2022 Oral: Preconditioning for Scalable Gaussian Process Hyperparameter Optimization
  Jonathan Wenger · Geoff Pleiss · Philipp Hennig · John Cunningham · Jacob Gardner
- 2022 Spotlight: Variational nearest neighbor Gaussian process
  Luhuan Wu · Geoff Pleiss · John Cunningham
- 2021 Poster: Unsupervised Representation Learning via Neural Activation Coding
  Yookoon Park · Sangho Lee · Gunhee Kim · David Blei
- 2021 Poster: A Proxy Variable View of Shared Confounding
  Yixin Wang · David Blei
- 2021 Poster: Bias-Free Scalable Gaussian Processes via Randomized Truncations
  Andres Potapczynski · Luhuan Wu · Dan Biderman · Geoff Pleiss · John Cunningham
- 2021 Spotlight: A Proxy Variable View of Shared Confounding
  Yixin Wang · David Blei
- 2021 Oral: Unsupervised Representation Learning via Neural Activation Coding
  Yookoon Park · Sangho Lee · Gunhee Kim · David Blei
- 2021 Spotlight: Bias-Free Scalable Gaussian Processes via Randomized Truncations
  Andres Potapczynski · Luhuan Wu · Dan Biderman · Geoff Pleiss · John Cunningham
- 2020 Poster: The continuous categorical: a novel simplex-valued exponential family
  Elliott Gordon-Rodriguez · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Discriminative Regularization for Latent Variable Models with Applications to Electrocardiography
  Andrew Miller · Ziad Obermeyer · John Cunningham · Sendhil Mullainathan
- 2019 Poster: The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions
  Raj Agrawal · Brian Trippe · Jonathan Huggins · Tamara Broderick
- 2019 Oral: The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions
  Raj Agrawal · Brian Trippe · Jonathan Huggins · Tamara Broderick
- 2019 Oral: Discriminative Regularization for Latent Variable Models with Applications to Electrocardiography
  Andrew Miller · Ziad Obermeyer · John Cunningham · Sendhil Mullainathan
- 2019 Poster: LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations
  Brian Trippe · Jonathan Huggins · Raj Agrawal · Tamara Broderick
- 2019 Oral: LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations
  Brian Trippe · Jonathan Huggins · Raj Agrawal · Tamara Broderick
- 2018 Poster: Noisin: Unbiased Regularization for Recurrent Neural Networks
  Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei
- 2018 Oral: Noisin: Unbiased Regularization for Recurrent Neural Networks
  Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei
- 2018 Poster: Augment and Reduce: Stochastic Inference for Large Categorical Distributions
  Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei
- 2018 Poster: Black Box FDR
  Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan
- 2018 Oral: Augment and Reduce: Stochastic Inference for Large Categorical Distributions
  Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei
- 2018 Oral: Black Box FDR
  Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan
- 2017 Workshop: Implicit Generative Models
  Rajesh Ranganath · Ian Goodfellow · Dustin Tran · David Blei · Balaji Lakshminarayanan · Shakir Mohamed
- 2017 Poster: Robust Probabilistic Modeling with Bayesian Data Reweighting
  Yixin Wang · Alp Kucukelbir · David Blei
- 2017 Poster: Evaluating Bayesian Models with Posterior Dispersion Indices
  Alp Kucukelbir · Yixin Wang · David Blei
- 2017 Poster: Zero-Inflated Exponential Family Embeddings
  Liping Liu · David Blei
- 2017 Talk: Zero-Inflated Exponential Family Embeddings
  Liping Liu · David Blei
- 2017 Talk: Evaluating Bayesian Models with Posterior Dispersion Indices
  Alp Kucukelbir · Yixin Wang · David Blei
- 2017 Talk: Robust Probabilistic Modeling with Bayesian Data Reweighting
  Yixin Wang · Alp Kucukelbir · David Blei