Poster
Rates of Convergence for Sparse Variational Gaussian Process Regression
David Burt · Carl E Rasmussen · Mark van der Wilk
Pacific Ballroom #270
Keywords: [ Approximate Inference ] [ Bayesian Nonparametrics ] [ Gaussian Processes ]
Outstanding Paper
Abstract:
Excellent variational approximations to Gaussian process posteriors have been developed which avoid the O(N^3) scaling with dataset size N. They reduce the computational cost to O(NM^2), with M ≪ N the number of inducing variables, which summarise the process. While the computational cost seems to be linear in N, the true complexity of the algorithm depends on how M must increase to ensure a certain quality of approximation. We show that with high probability the KL divergence can be made arbitrarily small by growing M more slowly than N. A particular case is that for regression with normally distributed inputs in D dimensions with the Squared Exponential kernel, M = O(log^D N) suffices. Our results show that as datasets grow, Gaussian process posteriors can be approximated cheaply, and provide a concrete rule for how to increase M in continual learning scenarios.
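As a rough illustration of the sublinear growth rule, the sketch below picks a number of inducing variables that scales like (log N)^D as the dataset grows. The proportionality constant c is a hypothetical tuning parameter, not something the paper prescribes; the result is asymptotic, so this is only a minimal sketch of how M could be increased in a continual-learning loop.

```python
import math

def suggested_num_inducing(n: int, d: int, c: float = 1.0) -> int:
    """Illustrative rule M ~ c * (log N)^D for the Squared Exponential kernel
    with normally distributed D-dimensional inputs, following the paper's
    asymptotic rate. The constant c is a hypothetical choice, not given
    by the paper."""
    return max(1, math.ceil(c * math.log(n) ** d))

# As N grows, M grows much more slowly, so the O(N M^2) cost of the
# sparse variational approximation stays far below the O(N^3) exact cost.
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(n, suggested_num_inducing(n, d=2))
```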