Scalable Training of Inference Networks for Gaussian-Process Models
Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu

Wed Jun 12th 03:05 -- 03:10 PM @ Room 101

Inference in Gaussian process (GP) models is computationally challenging for large data and often difficult to approximate well with a small number of inducing points. We explore an alternative approximation that employs stochastic inference networks (e.g., Bayesian neural networks) for flexible inference. Unfortunately, minibatch training of such networks struggles to learn meaningful correlations over function outputs for a large dataset. We propose an algorithm that enables such training by tracking a stochastic, functional mirror-descent algorithm. At each iteration, this requires considering only a finite number of input locations, resulting in a scalable and easy-to-implement algorithm. Empirical results show comparable and, sometimes, superior performance to existing sparse variational GP methods.
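To give a flavor of the scalability idea in the abstract, the toy sketch below is an illustration, not the paper's method: a random-feature "inference network" is trained by stochastic gradient steps to track an exact GP posterior mean, where each iteration evaluates the target only at a fresh, finite minibatch of input locations. All names (`rbf`, `feats`, learning rate, feature count) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: noisy sine observations.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d2 = (a[:, None, 0] - b[None, :, 0]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

# Exact GP posterior mean, used here as the target to track.
K = rbf(X, X) + 0.01 * np.eye(len(X))
alpha = np.linalg.solve(K, y)
def gp_mean(Z):
    return rbf(Z, X) @ alpha

# A simple surrogate network: random Fourier features + linear weights.
D = 200
W = rng.standard_normal((D, 1))
b = rng.uniform(0, 2 * np.pi, D)
theta = np.zeros(D)
def feats(Z):
    return np.sqrt(2.0 / D) * np.cos(Z @ W.T + b)

# Each iteration touches only a finite minibatch of input locations,
# mimicking (at a high level) the finite-measurement-point updates
# described in the abstract.
lr = 0.5
for t in range(2000):
    Z = rng.uniform(-3, 3, size=(16, 1))   # fresh input locations
    Phi = feats(Z)
    err = Phi @ theta - gp_mean(Z)
    theta -= lr * Phi.T @ err / len(Z)

# The surrogate should now be close to the GP posterior mean.
Z_test = np.linspace(-3, 3, 50)[:, None]
rmse = np.sqrt(np.mean((feats(Z_test) @ theta - gp_mean(Z_test)) ** 2))
```

Note that no step above forms a kernel matrix over the full dataset inside the training loop; only the minibatch locations are evaluated, which is what makes this style of update scalable.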

Author Information

Jiaxin Shi (Tsinghua University)
Emti Khan (RIKEN)
Jun Zhu (Tsinghua University)
