

Distributionally-Aware Kernelized Bandit Problems for Risk Aversion

Sho Takemori

Hall E #1311

Keywords: [ PM: Gaussian Processes ] [ MISC: Online Learning, Active Learning and Bandits ] [ OPT: Zero-order and Black-box Optimization ] [ T: Online Learning and Bandits ]


The kernelized bandit problem is a theoretically justified framework with solid applications in various fields. Recently, there has been growing interest in generalizing the problem to the optimization of risk-averse metrics such as Conditional Value-at-Risk (CVaR) or Mean-Variance (MV). However, due to their model assumptions, most existing methods require an explicit design of environment random variables and can incur large regret because of their possibly high dimensionality. To address these issues, in this paper, we model environments using a family of output distributions (or, more precisely, a probability kernel) and Kernel Mean Embeddings (KME), and provide novel UCB-type algorithms for CVaR and MV. Moreover, we provide algorithm-independent lower bounds for CVaR in the case of Matérn kernels, and propose a nearly optimal algorithm. Furthermore, we empirically verify our theoretical results in synthetic environments, and demonstrate that our proposed method significantly outperforms a baseline in many cases.
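To make the two risk-averse metrics concrete, the following is a minimal sketch of how CVaR and Mean-Variance can be estimated from samples of an arm's reward distribution. The sign conventions are assumptions for illustration (CVaR as the mean of the worst α-fraction of rewards, MV as mean minus a risk-weight times variance); they are not taken from the paper itself.

```python
import numpy as np

def cvar(samples, alpha):
    """Empirical CVaR at level alpha: the average of the worst
    (lowest) alpha-fraction of reward samples. This follows the
    reward-maximization convention, assumed here for illustration."""
    samples = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(samples))))  # size of the tail
    return samples[:k].mean()

def mean_variance(samples, rho):
    """Empirical Mean-Variance metric: mean reward minus a
    risk-aversion coefficient rho times the reward variance."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean() - rho * samples.var()

# Example: evaluate both metrics on simulated rewards from one arm.
rng = np.random.default_rng(0)
rewards = rng.normal(loc=1.0, scale=0.5, size=10_000)
print(cvar(rewards, alpha=0.05))        # mean of the worst 5% of rewards
print(mean_variance(rewards, rho=1.0))  # mean penalized by variance
```

A risk-averse bandit algorithm would select arms by an optimistic upper bound on such a metric rather than on the plain mean, which is what distinguishes this setting from the standard kernelized bandit.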
