Talk
Multi-fidelity Bayesian Optimisation with Continuous Approximations
Kirthevasan Kandasamy · Gautam Dasarathy · Barnabás Póczos · Jeff Schneider
C4.1
Abstract:
Bandit methods for black-box optimisation, such as Bayesian optimisation,
are used in a variety of applications including hyper-parameter tuning and
experiment design.
Recently, \emph{multi-fidelity} methods have garnered
considerable attention since function evaluations have become increasingly expensive in
such applications.
Multi-fidelity methods use cheap approximations to the function of
interest to speed up the overall optimisation process.
However, most multi-fidelity methods assume only a finite number of approximations.
In many practical applications, however, a continuous spectrum of approximations might be
available.
For instance, when tuning an expensive neural network, one might approximate the
cross-validation performance using fewer data points $N$ and/or fewer training iterations $T$.
Here, the approximations are best viewed as arising from a continuous two-dimensional
fidelity space $(N, T)$.
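For concreteness, here is a minimal sketch of such a continuous-fidelity objective, assuming scikit-learn; the function `evaluate_at_fidelity`, its arguments, and the choice of model are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

def evaluate_at_fidelity(alpha, n, t, X, y, seed=None):
    """Approximate cross-validation accuracy at fidelity (N, T) = (n, t):
    train on only n subsampled points for t passes over the data.
    The full-fidelity objective corresponds to n = len(X) and a large t."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n, replace=False)       # cheaper: fewer data points
    model = SGDClassifier(alpha=alpha, max_iter=t, tol=None)  # cheaper: fewer iterations
    return cross_val_score(model, X[idx], y[idx], cv=3).mean()
```

A multi-fidelity optimiser would then trade off the cost of each query, which grows with $n$ and $t$, against how informative the query is about the full-fidelity objective.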
In this work, we develop a Bayesian optimisation method, \boca, for this setting.
We characterise its theoretical properties and show that it achieves better regret than
strategies which ignore the approximations.
\boca outperforms several other baselines in synthetic and real experiments.