Inference in Gaussian process (GP) models is computationally challenging for large data and often difficult to approximate with a small number of inducing points. We explore an alternative approximation that employs stochastic inference networks for flexible inference. Unfortunately, for such networks, minibatch training makes it difficult to learn meaningful correlations over function outputs for a large dataset. We propose an algorithm that enables such training by tracking a stochastic, functional mirror-descent algorithm. Each iteration requires considering only a finite number of input locations, resulting in a scalable and easy-to-implement algorithm. Empirical results show performance comparable, and sometimes superior, to existing sparse variational GP methods.
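To make the abstract's idea concrete, below is a minimal, self-contained sketch, not the paper's actual algorithm: the stochastic inference network is replaced by a simple random-Fourier-feature mean model, the functional mirror-descent update is approximated by moving the network's function values toward the minibatch GP posterior mean, and all data, constants, and names (`beta`, `lr`, the toy `sin` regression task) are illustrative assumptions. It shows the key property from the abstract: each training step touches only a finite set of input locations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (illustrative): y = sin(x) + noise.
N = 200
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel, used as the GP prior covariance."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# Stand-in "inference network": a linear model on random Fourier
# features (the paper uses richer stochastic networks).
D = 100
W = rng.normal(size=(1, D))            # fixed random frequencies
b = rng.uniform(0.0, 2 * np.pi, D)     # fixed random phases
theta = np.zeros(D)                    # trainable parameters

def feats(A):
    return np.sqrt(2.0 / D) * np.cos(A @ W + b)

def net_mean(A):
    return feats(A) @ theta

noise_var = 0.1 ** 2
beta, lr = 0.3, 0.5                    # mirror-descent / fitting step sizes
for _ in range(2000):
    # 1) Sample a minibatch plus extra "measurement" locations, so each
    #    step only ever involves finitely many inputs.
    idx = rng.choice(N, size=20, replace=False)
    Xb, yb = X[idx], y[idx]
    Xm = rng.uniform(-3, 3, size=(30, 1))
    Xa = np.vstack([Xb, Xm])

    # 2) Exact GP posterior mean conditioned on the minibatch only.
    Kbb = rbf(Xb, Xb) + noise_var * np.eye(len(Xb))
    post_mean = rbf(Xa, Xb) @ np.linalg.solve(Kbb, yb)

    # 3) Mirror-descent-style target: move the current function values
    #    a step of size beta toward the minibatch posterior mean.
    cur = net_mean(Xa)
    target = (1 - beta) * cur + beta * post_mean

    # 4) One gradient step fitting the network to the target at the
    #    sampled locations (squared loss).
    Phi = feats(Xa)
    theta -= lr * Phi.T @ (Phi @ theta - target) / len(Xa)

# After training, the learned mean should roughly track sin(x).
err = np.mean((net_mean(X) - np.sin(X[:, 0])) ** 2)
```

This mean-only sketch omits the covariance update and the natural-parameter form of the mirror step that the paper works with; it is meant only to illustrate why training stays scalable: every update is computed at the sampled locations `Xa` rather than over the whole dataset.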
Author Information
Jiaxin Shi (Tsinghua University)
Mohammad Emtiyaz Khan (RIKEN)
Jun Zhu (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Oral: Scalable Training of Inference Networks for Gaussian-Process Models »
Wed. Jun 12th 10:05 -- 10:10 PM, Room 101
More from the Same Authors
-
2022 Poster: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching »
Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu -
2022 Spotlight: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching »
Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu -
2022 Poster: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models »
Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang -
2022 Poster: GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing »
Zhongkai Hao · Chengyang Ying · Yinpeng Dong · Hang Su · Jian Song · Jun Zhu -
2022 Spotlight: GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing »
Zhongkai Hao · Chengyang Ying · Yinpeng Dong · Hang Su · Jian Song · Jun Zhu -
2022 Spotlight: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models »
Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang -
2021: Invited talk 2: Q&A »
Mohammad Emtiyaz Khan -
2020 Poster: Training Binary Neural Networks using the Bayesian Learning Rule »
Xiangming Meng · Roman Bachmann · Mohammad Emtiyaz Khan -
2020 Poster: Handling the Positive-Definite Constraint in the Bayesian Learning Rule »
Wu Lin · Mark Schmidt · Mohammad Emtiyaz Khan -
2020 Poster: Variational Imitation Learning with Diverse-quality Demonstrations »
Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama -
2020 Poster: Nonparametric Score Estimators »
Yuhao Zhou · Jiaxin Shi · Jun Zhu -
2019 Poster: Understanding and Accelerating Particle-Based Variational Inference »
Chang Liu · Jingwei Zhuo · Pengyu Cheng · RUIYI (ROY) ZHANG · Jun Zhu -
2019 Oral: Understanding and Accelerating Particle-Based Variational Inference »
Chang Liu · Jingwei Zhuo · Pengyu Cheng · RUIYI (ROY) ZHANG · Jun Zhu -
2019 Poster: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations »
Wu Lin · Mohammad Emtiyaz Khan · Mark Schmidt -
2019 Poster: Understanding MCMC Dynamics as Flows on the Wasserstein Space »
Chang Liu · Jingwei Zhuo · Jun Zhu -
2019 Oral: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations »
Wu Lin · Mohammad Emtiyaz Khan · Mark Schmidt -
2019 Oral: Understanding MCMC Dynamics as Flows on the Wasserstein Space »
Chang Liu · Jingwei Zhuo · Jun Zhu -
2018 Poster: Message Passing Stein Variational Gradient Descent »
Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang -
2018 Oral: Message Passing Stein Variational Gradient Descent »
Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang -
2018 Poster: Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam »
Mohammad Emtiyaz Khan · Didrik Nielsen · Voot Tangkaratt · Wu Lin · Yarin Gal · Akash Srivastava -
2018 Oral: Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam »
Mohammad Emtiyaz Khan · Didrik Nielsen · Voot Tangkaratt · Wu Lin · Yarin Gal · Akash Srivastava -
2018 Poster: A Spectral Approach to Gradient Estimation for Implicit Distributions »
Jiaxin Shi · Shengyang Sun · Jun Zhu -
2018 Oral: A Spectral Approach to Gradient Estimation for Implicit Distributions »
Jiaxin Shi · Shengyang Sun · Jun Zhu