

On the Finite-Time Performance of the Knowledge Gradient Algorithm

Yanwen Li · Siyang Gao

Hall E #1210

Keywords: [ OPT: Optimization and Learning under Uncertainty ] [ PM: Bayesian Models and Methods ] [ T: Learning Theory ] [ T: Optimization ] [ T: Online Learning and Bandits ]


The knowledge gradient (KG) algorithm is a popular and effective algorithm for the best arm identification (BAI) problem. Because the KG calculation is complex, theoretical analysis of this algorithm is difficult, and existing results mostly concern its asymptotic performance, e.g., consistency and asymptotic sample allocation. In this research, we present new theoretical results on the finite-time performance of the KG algorithm. Under independent and normally distributed rewards, we derive lower and upper bounds on the probability of error and the simple regret of the algorithm. With these bounds, existing asymptotic results become simple corollaries. We also characterize the performance of the algorithm for the multi-armed bandit (MAB) problem. These developments not only extend the existing analysis of the KG algorithm, but can also be used to analyze other improvement-based algorithms. Finally, we use numerical experiments to further demonstrate the finite-time behavior of the KG algorithm.
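For readers unfamiliar with the allocation rule being analyzed, a minimal sketch of the standard KG computation under independent normal rewards may help. This is the common textbook form of the KG factor (the one-step expected improvement of the posterior maximum), not code from the paper; all names and the known-noise-variance assumption are ours.

```python
import math

def kg_factors(means, post_vars, noise_var):
    """One-step knowledge-gradient value for each arm under independent
    normal rewards with known sampling variance (a standard textbook
    form; notation assumed, not taken from the paper).

    means:     posterior means of the arms
    post_vars: posterior variances of those means
    noise_var: known variance of a single reward observation
    """
    def f(z):
        # Normal loss function: f(z) = z * Phi(z) + phi(z)
        phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
        Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        return z * Phi + phi

    kg = []
    for i, (mu, s2) in enumerate(zip(means, post_vars)):
        # Std. dev. of the change in arm i's posterior mean after one
        # more sample: sigma_tilde^2 = s2^2 / (s2 + noise_var)
        sigma_tilde = math.sqrt(s2 * s2 / (s2 + noise_var)) if s2 > 0 else 0.0
        # Gap between arm i and the best of the other arms
        best_other = max(m for j, m in enumerate(means) if j != i)
        delta = abs(mu - best_other)
        kg.append(sigma_tilde * f(-delta / sigma_tilde) if sigma_tilde > 0 else 0.0)
    return kg

# The KG policy samples the arm with the largest KG factor, then
# updates that arm's posterior and repeats until the budget is spent.
```

The complexity the abstract alludes to is visible here: each arm's KG value couples its own posterior with the maximum over all other arms through the nonlinear function f, which is what makes finite-time analysis of the resulting sampling path difficult.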
