Bias-Free Scalable Gaussian Processes via Randomized Truncations
Andres Potapczynski · Luhuan Wu · Dan Biderman · Geoff Pleiss · John Cunningham

Thu Jul 22 05:35 PM -- 05:40 PM (PDT) @

Scalable Gaussian process methods are computationally attractive, yet introduce modeling biases that require rigorous study. This paper analyzes two common techniques: early-truncated conjugate gradients (CG) and random Fourier features (RFF). We find that both methods introduce a systematic bias in the learned hyperparameters: CG tends to underfit while RFF tends to overfit. We address these issues using randomized truncation estimators that eliminate bias in exchange for increased variance. In the case of RFF, we show that the bias-to-variance conversion is indeed a trade-off: the additional variance proves detrimental to optimization. However, in the case of CG, our unbiased learning procedure meaningfully outperforms its biased counterpart with minimal additional computation. Our code is available at https://github.com/cunningham-lab/RTGPS.
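The core idea behind randomized truncation (sometimes called "Russian roulette") estimators can be illustrated on a simple infinite series: truncate at a random index and reweight each kept term by the inverse probability that it survived truncation, so the expectation recovers the full sum. The sketch below is a minimal, generic illustration of that principle, not the paper's GP-specific method; the function names and the geometric stopping rule are illustrative choices.

```python
import random

def russian_roulette_sum(term, stop_prob, rng):
    """Unbiased single-sample estimate of sum_{k=1}^inf term(k).

    Truncate at a random index N (geometric with parameter stop_prob)
    and reweight each kept term by 1 / P(N >= k), so the truncation
    bias cancels in expectation (at the cost of extra variance).
    """
    estimate = 0.0
    survival = 1.0  # P(N >= k) = (1 - stop_prob)**(k - 1)
    k = 1
    while True:
        estimate += term(k) / survival
        if rng.random() < stop_prob:  # stop after including term k
            return estimate
        survival *= 1.0 - stop_prob
        k += 1

# Example: geometric series sum_{k>=1} 0.5**k, whose true value is 1.0.
rng = random.Random(0)
term = lambda k: 0.5 ** k
samples = [russian_roulette_sum(term, 0.5, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
```

Averaging many such samples converges to the true sum, while any fixed truncation of this series would be biased low. In the paper's setting, the "terms" are successive CG iterations (or batches of random features) rather than scalars, and the truncation distribution controls the bias/variance/compute trade-off.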

Author Information

Andres Potapczynski (Columbia University)
Luhuan Wu (Columbia University)
Dan Biderman (Columbia University)
Geoff Pleiss (Columbia University)
John Cunningham (Columbia University)
