We introduce a new framework for training deep generative models for high-dimensional conditional density estimation. The Bottleneck Conditional Density Estimator (BCDE) is a variant of the conditional variational autoencoder (CVAE) that employs layer(s) of stochastic variables as the bottleneck between the input x and target y, where both are high-dimensional. Crucially, we propose a new hybrid training method that blends the conditional generative model with a joint generative model. Hybrid blending is the key to effective training of the BCDE, which avoids overfitting and provides a novel mechanism for leveraging unlabeled data. We show that our hybrid training procedure enables models to achieve competitive results in the MNIST quadrant prediction task in the fully-supervised setting, and sets new benchmarks in the semi-supervised regime for MNIST, SVHN, and CelebA.
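As a rough sketch of the hybrid blending idea described above (the blending weight alpha and the names L_cond and L_joint are assumed notation for illustration, not taken from the paper), the training objective can be viewed as a convex combination of two variational lower bounds computed with shared model parameters:

    L_hybrid = alpha * L_cond(x, y) + (1 - alpha) * L_joint(x, y),   alpha in [0, 1]

where L_cond lower-bounds the conditional likelihood log p(y | x) and L_joint lower-bounds the joint likelihood log p(x, y). In this view, the joint component is what allows unlabeled examples to contribute to training, e.g., through a bound on the marginal likelihood of the observed variable alone.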
Author Information
Rui Shu (Stanford University)
Hung Bui (Adobe Research)
Mohammad Ghavamzadeh (Adobe Research & INRIA)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Bottleneck Conditional Density Estimation
  Mon. Aug 7th, 08:30 AM -- 12:00 PM, Room Gallery #95
More from the Same Authors
- 2022 Poster: Bit Prioritization in Variational Autoencoders via Progressive Coding
  Rui Shu · Stefano Ermon
- 2022 Spotlight: Bit Prioritization in Variational Autoencoders via Progressive Coding
  Rui Shu · Stefano Ermon
- 2021 Poster: Temporal Predictive Coding For Model-Based Planning In Latent Space
  Tung Nguyen · Rui Shu · Tuan Pham · Hung Bui · Stefano Ermon
- 2021 Spotlight: Temporal Predictive Coding For Model-Based Planning In Latent Space
  Tung Nguyen · Rui Shu · Tuan Pham · Hung Bui · Stefano Ermon
- 2020 Poster: Predictive Coding for Locally-Linear Control
  Rui Shu · Tung Nguyen · Yinlam Chow · Tuan Pham · Khoat Than · Mohammad Ghavamzadeh · Stefano Ermon · Hung Bui
- 2020 Poster: Collapsed Amortized Variational Inference for Switching Nonlinear Dynamical Systems
  Zhe Dong · Bryan Seybold · Kevin Murphy · Hung Bui
- 2020 Poster: Fair Generative Modeling via Weak Supervision
  Kristy Choi · Aditya Grover · Trisha Singh · Rui Shu · Stefano Ermon
- 2017 Poster: Active Learning for Accurate Estimation of Linear Models
  Carlos Riquelme Ruiz · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2017 Poster: Multilevel Clustering via Wasserstein Means
  Nhat Ho · XuanLong Nguyen · Mikhail Yurochkin · Hung Bui · Viet Huynh · Dinh Phung
- 2017 Talk: Multilevel Clustering via Wasserstein Means
  Nhat Ho · XuanLong Nguyen · Mikhail Yurochkin · Hung Bui · Viet Huynh · Dinh Phung
- 2017 Poster: Model-Independent Online Learning for Influence Maximization
  Sharan Vaswani · Branislav Kveton · Zheng Wen · Mohammad Ghavamzadeh · Laks V.S Lakshmanan · Mark Schmidt
- 2017 Poster: Online Learning to Rank in Stochastic Click Models
  Masrour Zoghi · Tomas Tunys · Mohammad Ghavamzadeh · Branislav Kveton · Csaba Szepesvari · Zheng Wen
- 2017 Talk: Active Learning for Accurate Estimation of Linear Models
  Carlos Riquelme Ruiz · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2017 Talk: Online Learning to Rank in Stochastic Click Models
  Masrour Zoghi · Tomas Tunys · Mohammad Ghavamzadeh · Branislav Kveton · Csaba Szepesvari · Zheng Wen
- 2017 Talk: Model-Independent Online Learning for Influence Maximization
  Sharan Vaswani · Branislav Kveton · Zheng Wen · Mohammad Ghavamzadeh · Laks V.S Lakshmanan · Mark Schmidt