Robust Multi-Objective Bayesian Optimization Under Input Noise
Samuel Daulton · Sait Cakmak · Maximilian Balandat · Michael A Osborne · Enlu Zhou · Eytan Bakshy

Wed Jul 20 02:35 PM -- 02:40 PM (PDT) @ Room 327 - 329

Bayesian optimization (BO) is a sample-efficient approach for tuning design parameters to optimize expensive-to-evaluate, black-box performance metrics. In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected. Although BO methods have been proposed for optimizing a single objective under input noise, no existing method addresses the practical scenario where there are multiple objectives that are sensitive to input perturbations. In this work, we propose the first multi-objective BO method that is robust to input noise. We formalize our goal as optimizing the multivariate value-at-risk (MVaR), a risk measure of the uncertain objectives. Since directly optimizing MVaR is computationally infeasible in many settings, we propose a scalable, theoretically-grounded approach for optimizing MVaR using random scalarizations. Empirically, we find that our approach significantly outperforms alternative methods and efficiently identifies optimal robust designs that will satisfy specifications across multiple metrics with high probability.
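To make the MVaR objective in the abstract concrete, here is a minimal Monte Carlo sketch (not the authors' implementation): given samples of the noisy objective vector at a design, a specification z belongs to the MVaR set if it is met in every objective with probability at least α. The function names `prob_satisfied` and `in_mvar` are illustrative, and maximization of all objectives is assumed.

```python
import random

def prob_satisfied(samples, z):
    """Fraction of sampled objective vectors y that meet or exceed
    the specification z in every objective (maximization convention)."""
    hits = sum(all(y_k >= z_k for y_k, z_k in zip(y, z)) for y in samples)
    return hits / len(samples)

def in_mvar(samples, z, alpha):
    """True if the specification z is achieved across all objectives
    with probability at least alpha under the sampled outcomes."""
    return prob_satisfied(samples, z) >= alpha

# Toy example: two noisy objectives centered at (1.0, 2.0).
random.seed(0)
samples = [(1.0 + random.gauss(0, 0.1), 2.0 + random.gauss(0, 0.1))
           for _ in range(1000)]
print(in_mvar(samples, (0.8, 1.8), alpha=0.9))  # a conservative spec
```

The MVaR set itself is the Pareto frontier over all such achievable specifications z; the paper's contribution is optimizing this set efficiently via random scalarizations rather than by enumeration.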

Author Information

Samuel Daulton (Meta, University of Oxford)
I am a research scientist at Meta on the Core Data Science team, a PhD candidate in machine learning at the University of Oxford, and co-creator of BoTorch---an open-source library for Bayesian optimization research. Within Core Data Science, I work in the Adaptive Experimentation research group, and I am a member of the Machine Learning Research Group at Oxford. During my PhD, I am working with Michael Osborne (Oxford), Eytan Bakshy (Meta), and Max Balandat (Meta). My research focuses on methods for principled, sample-efficient optimization, including Bayesian optimization and transfer learning. I am particularly interested in practical methods for principled exploration (using probabilistic models) that are robust across applied problems and depend on few, if any, hyperparameters. Furthermore, I aim to democratize such methods by open-sourcing reproducible code. Prior to joining Meta, I worked with Finale Doshi-Velez at Harvard University on efficient and robust methods for transfer learning.

Sait Cakmak (Meta)
I am a Research Scientist with the Adaptive Experimentation group at Meta. Prior to Meta, I was a Ph.D. student in Operations Research at Georgia Tech, advised by Dr. Enlu Zhou, and received my bachelor's degrees in Industrial Engineering and Economics from Koç University, Turkey. My Ph.D. research focused on the optimization of black-box or simulation-based objectives using zeroth- or first-order information. In particular, I studied risk-averse optimization of such objectives, where risk aversion is achieved by optimizing a risk measure of the objective, calculated over the unknown environmental variables. I have used tools from stochastic derivative estimation, stochastic approximation, Bayesian optimization, and Gaussian processes, among others. In my free time, I enjoy walks with our puppy, Fin, annoying Python, the kitty, by trying to make her interact with Fin, occasional baking, biking, and various DIY projects involving woodworking and 3D printing.

Maximilian Balandat (Facebook)
Michael A Osborne (U Oxford)
Enlu Zhou
Eytan Bakshy (Meta)
