Bayesian optimization (BO) is a sample-efficient approach for tuning design parameters to optimize expensive-to-evaluate, black-box performance metrics. In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected. Although BO methods have been proposed for optimizing a single objective under input noise, no existing method addresses the practical scenario where there are multiple objectives that are sensitive to input perturbations. In this work, we propose the first multi-objective BO method that is robust to input noise. We formalize our goal as optimizing the multivariate value-at-risk (MVaR), a risk measure of the uncertain objectives. Since directly optimizing MVaR is computationally infeasible in many settings, we propose a scalable, theoretically grounded approach for optimizing MVaR using random scalarizations. Empirically, we find that our approach significantly outperforms alternative methods and efficiently identifies optimal robust designs that will satisfy specifications across multiple metrics with high probability.
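To make the random-scalarization idea concrete, below is a minimal, self-contained Python sketch (not the paper's implementation) of scoring a design by the empirical value-at-risk (VaR) of a randomly weighted Chebyshev scalarization of noisy objective evaluations. The toy simulator noisy_objectives, the noise scale, and the risk level alpha are hypothetical stand-ins for an expensive black-box problem; the sketch assumes two objectives that are maximized and normalized to [0, 1].

import numpy as np

# Illustrative sketch only: names and constants below are hypothetical.
rng = np.random.default_rng(0)

def chebyshev_scalarization(Y, w):
    """Weighted Chebyshev scalarization of objective samples Y (n x m),
    for maximization of objectives normalized to [0, 1]."""
    return np.min(Y * w, axis=-1)

def empirical_var(s, alpha):
    """Empirical alpha-level value-at-risk of scalarized samples s: the
    level the scalarized objective reaches with probability at least
    alpha under the input noise."""
    return np.quantile(s, 1.0 - alpha)

def noisy_objectives(x, n_samples=512):
    """Hypothetical stand-in for an expensive black-box evaluation under
    input noise: perturb the design x and return n_samples x 2 objectives."""
    x_noisy = x + 0.05 * rng.standard_normal((n_samples, x.shape[-1]))
    f1 = np.exp(-np.sum((x_noisy - 0.25) ** 2, axis=-1))
    f2 = np.exp(-np.sum((x_noisy - 0.75) ** 2, axis=-1))
    return np.stack([f1, f2], axis=-1)

# Score a candidate design under one random scalarization weight.
x = np.array([0.5, 0.5])
w = rng.dirichlet(np.ones(2))  # random weight on the probability simplex
Y = noisy_objectives(x)
score = empirical_var(chebyshev_scalarization(Y, w), alpha=0.9)
print(f"weight={np.round(w, 3)}, robust scalarized score={score:.4f}")

In a full BO loop, the noisy evaluations would come from posterior samples of a surrogate model rather than the true function, and drawing a fresh weight w at each iteration traces out different points of the MVaR set.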
Author Information
Samuel Daulton (Meta, University of Oxford)
I am a research scientist at Meta on the Core Data Science team, a PhD candidate in machine learning at the University of Oxford, and a co-creator of BoTorch, an open-source library for Bayesian optimization research. Within Core Data Science, I work in the Adaptive Experimentation research group, and at Oxford I am a member of the Machine Learning Research Group. During my PhD, I am working with Michael Osborne (Oxford), Eytan Bakshy (Meta), and Max Balandat (Meta). My research focuses on methods for principled, sample-efficient optimization, including Bayesian optimization and transfer learning. I am particularly interested in practical methods for principled exploration (using probabilistic models) that are robust across applied problems and depend on few, if any, hyperparameters. Furthermore, I aim to democratize such methods by open-sourcing reproducible code. Prior to joining Meta, I worked with Finale Doshi-Velez at Harvard University on efficient and robust methods for transfer learning.
Sait Cakmak (Meta)
I am a Research Scientist with the Adaptive Experimentation group at Meta. Prior to Meta, I was a Ph.D. student in Operations Research at Georgia Tech, advised by Dr. Enlu Zhou, and I received my bachelor's degrees in Industrial Engineering and Economics from Koç University, Turkey. My PhD research focused on the optimization of black-box or simulation-based objectives using zeroth- or first-order information. In particular, I studied risk-averse optimization of such objectives, where risk aversion is achieved by optimizing a risk measure of the objective, computed over the unknown environmental variables. I have used tools from stochastic derivative estimation, stochastic approximation, Bayesian optimization, and Gaussian processes, among others. In my free time, I enjoy walks with our puppy, Fin; annoying Python, the kitty, by trying to make her interact with Fin; occasional baking; biking; and various DIY projects involving woodworking and 3D printing.
Maximilian Balandat (Facebook)
Michael A Osborne (University of Oxford)
Enlu Zhou (Georgia Tech)
Eytan Bakshy (Meta)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Robust Multi-Objective Bayesian Optimization Under Input Noise
  Wed. Jul 20th through Thu. Jul 21st, Room Hall E #737
More from the Same Authors
- 2021: Attacking Graph Classification via Bayesian Optimisation
  Xingchen Wan · Henry Kenlay · Binxin Ru · Arno Blaas · Michael A Osborne · Xiaowen Dong
- 2021: Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization
  David Eriksson · Pierce Chuang · Samuel Daulton · Peng Xia · Akshat Shrivastava · Arun Babu · Shicong Zhao · Ahmed A Aly · Ganesh Venkatesh · Maximilian Balandat
- 2021: Revisiting Design Choices in Offline Model Based Reinforcement Learning
  Cong Lu · Philip Ball · Jack Parker-Holder · Michael A Osborne · Stephen Roberts
- 2022: Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations
  Cong Lu · Philip Ball · Tim G. J Rudner · Jack Parker-Holder · Michael A Osborne · Yee-Whye Teh
- 2021 Workshop: Challenges in Deploying and Monitoring Machine Learning Systems
  Alessandra Tosi · Nathan Korda · Michael A Osborne · Stephen Roberts · Andrei Paleyes · Fariba Yousefi
- 2021 Poster: Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces
  Xingchen Wan · Vu Nguyen · Huong Ha · Binxin Ru · Cong Lu · Michael A Osborne
- 2021 Poster: Optimal Transport Kernels for Sequential and Parallel Neural Architecture Search
  Vu Nguyen · Tam Le · Makoto Yamada · Michael A Osborne
- 2021 Spotlight: Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces
  Xingchen Wan · Vu Nguyen · Huong Ha · Binxin Ru · Cong Lu · Michael A Osborne
- 2021 Spotlight: Optimal Transport Kernels for Sequential and Parallel Neural Architecture Search
  Vu Nguyen · Tam Le · Makoto Yamada · Michael A Osborne
- 2020 Poster: Knowing The What But Not The Where in Bayesian Optimization
  Vu Nguyen · Michael A Osborne
- 2020 Poster: Bayesian Optimisation over Multiple Continuous and Categorical Inputs
  Binxin Ru · Ahsan Alvi · Vu Nguyen · Michael A Osborne · Stephen Roberts
- 2019 Poster: On the Limitations of Representing Functions on Sets
  Edward Wagstaff · Fabian Fuchs · Martin Engelcke · Ingmar Posner · Michael A Osborne
- 2019 Oral: On the Limitations of Representing Functions on Sets
  Edward Wagstaff · Fabian Fuchs · Martin Engelcke · Ingmar Posner · Michael A Osborne
- 2019 Poster: Automated Model Selection with Bayesian Quadrature
  Henry Chai · Jean-Francois Ton · Michael A Osborne · Roman Garnett
- 2019 Poster: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Asynchronous Batch Bayesian Optimisation with Improved Local Penalisation
  Ahsan Alvi · Binxin Ru · Jan-Peter Calliess · Stephen Roberts · Michael A Osborne
- 2019 Poster: Toward Understanding the Importance of Noise in Training Neural Networks
  Mo Zhou · Tianyi Liu · Yan Li · Dachao Lin · Enlu Zhou · Tuo Zhao
- 2019 Oral: Toward Understanding the Importance of Noise in Training Neural Networks
  Mo Zhou · Tianyi Liu · Yan Li · Dachao Lin · Enlu Zhou · Tuo Zhao
- 2019 Oral: Automated Model Selection with Bayesian Quadrature
  Henry Chai · Jean-Francois Ton · Michael A Osborne · Roman Garnett
- 2019 Oral: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Oral: Asynchronous Batch Bayesian Optimisation with Improved Local Penalisation
  Ahsan Alvi · Binxin Ru · Jan-Peter Calliess · Stephen Roberts · Michael A Osborne
- 2019 Poster: Fingerprint Policy Optimisation for Robust Reinforcement Learning
  Supratik Paul · Michael A Osborne · Shimon Whiteson
- 2019 Oral: Fingerprint Policy Optimisation for Robust Reinforcement Learning
  Supratik Paul · Michael A Osborne · Shimon Whiteson
- 2018 Poster: Fast Information-theoretic Bayesian Optimisation
  Binxin Ru · Michael A Osborne · Mark Mcleod · Diego Granziol
- 2018 Poster: Optimization, fast and slow: optimally switching between local and Bayesian optimization
  Mark McLeod · Stephen Roberts · Michael A Osborne
- 2018 Oral: Optimization, fast and slow: optimally switching between local and Bayesian optimization
  Mark McLeod · Stephen Roberts · Michael A Osborne
- 2018 Oral: Fast Information-theoretic Bayesian Optimisation
  Binxin Ru · Michael A Osborne · Mark Mcleod · Diego Granziol