Improving Few-Shot Design Optimization By Exploiting Auxiliary Information
Arjun Mani ⋅ Carl Vondrick ⋅ Richard Zemel
Abstract
Many real-world design problems involve optimizing an expensive black-box function $f(x)$, for which Bayesian Optimization is a sample-efficient framework. However, while the standard black-box setting assumes each query returns only a scalar reward, real-world experiments often generate a wealth of useful information. We introduce a new setting where an experiment generates high-dimensional auxiliary information $h(x)$ along with $f(x)$; moreover, a history of relevant, previously-solved tasks is available for accelerating optimization. We develop a novel method based on a neural model which predicts $f(x)$ for unseen designs given a few-shot context containing observations of $h(x)$. We evaluate our method on two challenging domains, robotic hardware design and hyperparameter tuning. On both domains, our method achieves improved few-shot prediction and faster design optimization, outperforming several multi-task optimization methods.
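The core idea in the abstract, a neural model that predicts $f(x)$ for a new design conditioned on a few-shot context of observed $(x, h(x), f(x))$ triples, can be illustrated with a minimal sketch. All dimensions, weight matrices, and function names below are hypothetical stand-ins (random weights in place of a trained network), not the paper's actual architecture; the sketch only shows the conditioning pattern: embed each observation, pool into a permutation-invariant context vector, then predict from (context, new design).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not from the paper):
# design x in R^4, auxiliary info h(x) in R^16, context vector in R^8.
D_X, D_H, D_CTX = 4, 16, 8

# Random matrices stand in for the weights of a trained neural model.
W_enc = rng.normal(size=(D_X + D_H + 1, D_CTX))
W_out = rng.normal(size=(D_CTX + D_X,))

def encode_context(xs, hs, fs):
    """Embed each observed (x, h(x), f(x)) triple and mean-pool,
    giving a permutation-invariant few-shot context vector."""
    triples = np.concatenate([xs, hs, fs[:, None]], axis=1)  # (k, D_X+D_H+1)
    return np.tanh(triples @ W_enc).mean(axis=0)             # (D_CTX,)

def predict_f(context, x_new):
    """Predict f(x_new) for an unseen design, conditioned on the context."""
    return float(np.concatenate([context, x_new]) @ W_out)

# Few-shot context of k=5 previously evaluated designs.
k = 5
xs = rng.normal(size=(k, D_X))
hs = rng.normal(size=(k, D_H))   # high-dimensional auxiliary observations
fs = rng.normal(size=k)          # scalar rewards

ctx = encode_context(xs, hs, fs)
y_hat = predict_f(ctx, rng.normal(size=D_X))
print(y_hat)
```

In an optimization loop, such a predictor would be queried over candidate designs to choose the next experiment, with the context growing as new observations arrive; the paper's actual model and training procedure are described in the body of the work.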