Bayesian optimization is renowned for its sample efficiency, but its application to higher-dimensional tasks is impeded by its focus on global optimization. To scale to higher-dimensional problems, we leverage the sample efficiency of Bayesian optimization in a local context. The optimization of the acquisition function is restricted to the vicinity of a Gaussian search distribution, which is moved towards high-value areas of the objective. The proposed information-theoretic update of the search distribution results in a Bayesian interpretation of local stochastic search: the search distribution encodes prior knowledge of the optimum’s location and is weighted at each iteration by the likelihood of this location’s optimality. We demonstrate the effectiveness of our algorithm on several benchmark objective functions as well as a continuous robotic task in which an informative prior is obtained by imitation learning.
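The local stochastic search view described above can be illustrated with a minimal sketch: a Gaussian search distribution proposes candidates near its mean, each candidate is re-weighted by a proxy for the likelihood that it is optimal, and the distribution's mean and variance are updated from the weighted samples. This is an assumption-laden simplification — it uses a plain exponential (softmax) weighting and moment matching in place of the paper's information-theoretic update, and no Gaussian-process surrogate — but it conveys the update structure.

```python
import math
import random


def local_stochastic_search(f, mu, sigma, iters=50, n=20, beta=5.0, seed=0):
    """Minimize f by moving a Gaussian search distribution toward
    low-cost regions. NOTE: a simplified stand-in for the paper's
    method; softmax weighting replaces the information-theoretic
    update, and f is evaluated directly rather than via a surrogate."""
    rng = random.Random(seed)
    for _ in range(iters):
        # Sample candidates in the vicinity of the search distribution.
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        vals = [f(x) for x in xs]
        best_val = min(vals)
        # Weight each sample by a proxy likelihood of its optimality.
        ws = [math.exp(-beta * (v - best_val)) for v in vals]
        z = sum(ws)
        ws = [w / z for w in ws]
        # Moment-matched update of the search distribution.
        mu = sum(w * x for w, x in zip(ws, xs))
        var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs))
        sigma = max(math.sqrt(var), 1e-3)  # keep a minimal exploration noise
    return mu


# Toy usage: the search distribution contracts around the minimum at x = 2.
best = local_stochastic_search(lambda x: (x - 2.0) ** 2, mu=0.0, sigma=1.0)
```

In the paper itself, the weighting comes from an acquisition function over a Gaussian-process model, and the update is regularized information-theoretically; the sketch only mirrors the prior-times-likelihood structure of the interpretation given in the abstract.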
Author Information
Riad Akrour (TU Darmstadt)
Dmitry Sorokin
Jan Peters (TU Darmstadt)
Gerhard Neumann (University of Lincoln)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Local Bayesian Optimization of Motor Skills
  Wed. Aug 9th 01:42 -- 02:00 AM Room C4.5
More from the Same Authors
- 2019 Poster: Projections for Approximate Policy Iteration Algorithms
  Riad Akrour · Joni Pajarinen · Jan Peters · Gerhard Neumann
- 2019 Oral: Projections for Approximate Policy Iteration Algorithms
  Riad Akrour · Joni Pajarinen · Jan Peters · Gerhard Neumann
- 2018 Poster: Efficient Gradient-Free Variational Inference using Policy Search
  Oleg Arenz · Gerhard Neumann · Mingjun Zhong
- 2018 Oral: Efficient Gradient-Free Variational Inference using Policy Search
  Oleg Arenz · Gerhard Neumann · Mingjun Zhong