Scalable Gaussian Process methods are computationally attractive, yet introduce modeling biases that require rigorous study. This paper analyzes two common techniques: early-truncated conjugate gradients (CG) and random Fourier features (RFF). We find that both methods introduce a systematic bias on the learned hyperparameters: CG tends to underfit while RFF tends to overfit. We address these issues using randomized truncation estimators that eliminate bias in exchange for increased variance. In the case of RFF, we show that the bias-to-variance conversion is indeed a trade-off: the additional variance proves detrimental to optimization. However, in the case of CG, our unbiased learning procedure meaningfully outperforms its biased counterpart with minimal additional computation. Our code is available at https://github.com/cunningham-lab/RTGPS.
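The randomized truncation idea at the core of the abstract (a Russian-roulette-style reweighting of a truncated series) can be illustrated with a minimal sketch. The code below is not the paper's implementation; the function name rr_truncation_estimate and its arguments partial_sums and trunc_probs are hypothetical. It only shows how dividing each retained increment by its survival probability removes the truncation bias, at the cost of extra variance.

import numpy as np

def rr_truncation_estimate(partial_sums, trunc_probs, rng=None):
    # Single-sample randomized-truncation (Russian-roulette) estimate of a series.
    # partial_sums: S_1, ..., S_K (e.g. the value after k CG iterations or k feature batches).
    # trunc_probs:  P(J = k) for k = 1..K; must be positive and sum to 1.
    rng = np.random.default_rng() if rng is None else rng
    partial_sums = np.asarray(partial_sums, dtype=float)
    trunc_probs = np.asarray(trunc_probs, dtype=float)
    j = rng.choice(len(partial_sums), p=trunc_probs)  # sampled truncation level (0-based)
    # Increments Delta_k = S_k - S_{k-1}, with S_0 = 0, kept up to the sampled level.
    deltas = np.diff(np.concatenate(([0.0], partial_sums[: j + 1])))
    # Survival probabilities P(J >= k); reweighting by them makes the estimate unbiased
    # for the full sum S_K, while shorter truncations keep the expected cost low.
    survival = 1.0 - np.concatenate(([0.0], np.cumsum(trunc_probs)))[: j + 1]
    return float(np.sum(deltas / survival))

# Illustrative call (hypothetical numbers): the estimate averages to the final partial sum.
# rr_truncation_estimate([1.0, 1.5, 1.75, 1.875], [0.4, 0.3, 0.2, 0.1])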
Author Information
Andres Potapczynski (Columbia University)
Luhuan Wu (Columbia University)
Dan Biderman (Columbia University)
Geoff Pleiss (Columbia University)
John Cunningham (Columbia University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Bias-Free Scalable Gaussian Processes via Randomized Truncations
  Fri. Jul 23rd 12:35 -- 12:40 AM
More from the Same Authors
- 2023: Practical and Asymptotically Exact Conditional Sampling in Diffusion Models
  Brian Trippe · Luhuan Wu · Christian Naesseth · David Blei · John Cunningham
- 2022 Poster: Preconditioning for Scalable Gaussian Process Hyperparameter Optimization
  Jonathan Wenger · Geoff Pleiss · Philipp Hennig · John Cunningham · Jacob Gardner
- 2022 Poster: Variational nearest neighbor Gaussian process
  Luhuan Wu · Geoff Pleiss · John Cunningham
- 2022 Oral: Preconditioning for Scalable Gaussian Process Hyperparameter Optimization
  Jonathan Wenger · Geoff Pleiss · Philipp Hennig · John Cunningham · Jacob Gardner
- 2022 Spotlight: Variational nearest neighbor Gaussian process
  Luhuan Wu · Geoff Pleiss · John Cunningham