ALAS: Additive Learnable Alpha-Stable Kernels for Flexible Bayesian Optimization
Weibo Huang ⋅ Cheng Hua
Abstract
Bayesian optimization is widely used for expensive black-box optimization, yet its success often depends on choosing a kernel that matches the objective’s unknown structure. In this work, we propose ALAS, a flexible Gaussian process kernel family built from symmetric $\alpha$-stable spectral components. By learning the stability parameter $\alpha$, ALAS adapts its effective smoothness from data, capturing both smooth trends and sharp irregularities. We present two parameterizations: ALAS, a single stationary component with joint spectral modulation, and ALAS-Sep, a separable variant that learns dimension-wise tail behavior to improve robustness on approximately decomposable objectives. Experiments on standard benchmarks and real-world surrogates demonstrate strong and robust performance across diverse settings.
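To make the idea concrete, a minimal sketch of a kernel with a symmetric $\alpha$-stable spectral density follows. By Bochner's theorem, a stationary kernel is the characteristic function of its spectral measure; for a symmetric $\alpha$-stable spectral density this gives $k(r) = \sigma^2 \exp(-(|r|/\ell)^\alpha)$, which is positive semidefinite for $0 < \alpha \le 2$, recovering the squared-exponential kernel at $\alpha = 2$ and the exponential (Ornstein–Uhlenbeck) kernel at $\alpha = 1$. The function name and parameterization here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def alas_kernel(X1, X2, alpha=1.5, lengthscale=1.0, variance=1.0):
    """Illustrative symmetric alpha-stable kernel (assumed form):
    k(r) = variance * exp(-(|r| / lengthscale)**alpha).
    Valid (PSD) for 0 < alpha <= 2; alpha is learnable in ALAS,
    interpolating between exponential (alpha=1) and
    squared-exponential (alpha=2) behavior."""
    # Pairwise absolute distances for 1-D inputs.
    r = np.abs(np.asarray(X1)[:, None] - np.asarray(X2)[None, :])
    return variance * np.exp(-((r / lengthscale) ** alpha))

# Kernel matrix on a small 1-D grid; should be positive semidefinite.
X = np.linspace(0.0, 1.0, 20)
K = alas_kernel(X, X, alpha=1.3)
```

Heavier spectral tails (smaller $\alpha$) yield rougher sample paths, which is how learning $\alpha$ lets the model match the objective's smoothness.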