Goal-Oriented Lower-Tail Calibration of Gaussian Processes for Bayesian Optimization
Aurélien Pion ⋅ Emmanuel Vazquez
Abstract
Bayesian optimization (BO) selects evaluation points for expensive black-box objectives using Gaussian process (GP) predictive distributions. Kernel choice and hyperparameter selection can yield miscalibrated predictive distributions, distorting the exploration--exploitation trade-off. In the minimization setting, sampling criteria such as expected improvement (EI) depend on the predictive lower tail and are therefore sensitive to miscalibration. This article studies goal-oriented calibration of GP predictive distributions below a low threshold $t$ in the noiseless setting, complementing standard GP modeling with hyperparameters selected by maximum likelihood. A framework for predictive reliability below $t$ is introduced, based on two notions of spatial calibration: occurrence calibration over the design space and thresholded $\mu$-calibration on the sublevel set $\lbrace x\in\mathbb{X} : f(x)\le t \rbrace$. Building on this framework, we propose tcGP, a post-hoc method that calibrates GP predictive distributions below $t$, and we establish a convergence result for the resulting EI-based global optimization algorithm. Experiments on standard benchmarks show improved lower-tail calibration and BO performance relative to standard GP models and globally calibrated GP models.
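To make concrete why EI depends on the predictive lower tail, the following is a minimal sketch (not the authors' implementation) of the standard closed-form EI for minimization under a Gaussian predictive distribution $\mathcal{N}(\mu(x), \sigma^2(x))$; the function names and arguments are illustrative:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization: E[max(f_best - Y, 0)]
    with Y ~ N(mu, sigma^2). Mass below f_best (the lower tail
    of the predictive distribution) drives the criterion, so a
    miscalibrated lower tail directly biases point selection."""
    if sigma <= 0.0:
        # Degenerate (noiseless, already-observed) prediction.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * cdf + pdf)
```

For instance, inflating $\sigma(x)$ (an over-dispersed lower tail) inflates EI and pushes the algorithm toward over-exploration, which is the kind of distortion that threshold-focused calibration aims to correct.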