Bayesian optimization (BO) is a popular paradigm for optimizing the hyperparameters of machine learning (ML) models due to its sample efficiency. Many ML models require running an iterative training procedure (e.g., stochastic gradient descent). This motivates the question of whether information available during the training process (e.g., validation accuracy after each epoch) can be exploited to improve the epoch efficiency of BO algorithms by early-stopping model training under hyperparameter settings that will end up under-performing, thereby eliminating unnecessary training epochs. This paper proposes to unify BO (specifically, Gaussian process-upper confidence bound (GP-UCB)) with Bayesian optimal stopping (BOS), which we call BO-BOS, to boost the epoch efficiency of BO. While GP-UCB is sample-efficient in the number of function evaluations, BOS complements it with epoch efficiency for each function evaluation by providing a principled optimal stopping mechanism for early stopping. BO-BOS preserves the (asymptotic) no-regret performance of GP-UCB using our specified choice of BOS parameters, which is amenable to an elegant interpretation in terms of the exploration-exploitation trade-off. We empirically evaluate the performance of BO-BOS and demonstrate its generality in hyperparameter optimization of ML models and two other interesting applications.
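To make the idea concrete, below is a minimal, illustrative sketch of a GP-UCB loop with per-evaluation early stopping. Everything here is a simplifying assumption for exposition: train_one_epoch is a synthetic stand-in for one epoch of model training, and the optimistic-bound test inside evaluate_with_early_stopping is a crude heuristic standing in for the paper's actual BOS decision rule.

```python
# Illustrative sketch: GP-UCB hyperparameter search where each function
# evaluation (a training run) may be stopped early if it looks unpromising.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def train_one_epoch(x, epoch):
    # Hypothetical stand-in for one epoch of training under hyperparameter x:
    # returns a noisy validation accuracy that improves with epochs.
    return (1 - np.exp(-3 * x[0])) * (1 - 0.5 ** (epoch + 1)) + 0.01 * np.random.randn()

def evaluate_with_early_stopping(x, incumbent, max_epochs=20, slack=0.05):
    """Run training epoch by epoch; stop early when the learning curve makes
    beating the incumbent sufficiently unlikely (an illustrative rule, not
    the paper's Bayesian optimal stopping mechanism)."""
    acc = 0.0
    for epoch in range(max_epochs):
        acc = train_one_epoch(x, epoch)
        # Optimistic bound: assume the remaining epochs add at most `slack`.
        if acc + slack < incumbent:
            break  # early-stop: this setting will likely under-perform
    return acc

rng = np.random.default_rng(0)
bounds = np.array([[0.0, 1.0]])            # 1-D hyperparameter space
X = rng.uniform(*bounds[0], size=(3, 1))   # initial design
y = np.array([evaluate_with_early_stopping(x, incumbent=-np.inf) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
for t in range(20):
    gp.fit(X, y)
    # GP-UCB acquisition: mean + beta^{1/2} * std over a candidate grid.
    cand = np.linspace(*bounds[0], 200).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    beta = 2.0 * np.log((t + 1) ** 2 * np.pi ** 2 / 0.3)  # common UCB schedule
    x_next = cand[np.argmax(mu + np.sqrt(beta) * sd)]
    y_next = evaluate_with_early_stopping(x_next, incumbent=y.max())
    X, y = np.vstack([X, x_next]), np.append(y, y_next)

print("best accuracy:", y.max(), "at x =", X[np.argmax(y)])
```

In BO-BOS proper, the stopping decision is derived from Bayesian optimal stopping with parameters chosen so that GP-UCB's no-regret guarantee is preserved; the heuristic above only mimics its effect of truncating unpromising training runs.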
Author Information
Zhongxiang Dai (National University of Singapore)
Haibin Yu (National University of Singapore)
Bryan Kian Hsiang Low (National University of Singapore)
Patrick Jaillet (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Bayesian Optimization Meets Bayesian Optimal Stopping
  Wed Jun 12th 10:10 -- 10:15 PM, Room 101
More from the Same Authors
- 2020 Poster: Learning Task-Agnostic Embedding of Multiple Black-Box Experts for Multi-Task Model Fusion
  Nghia Hoang · Thanh Lam · Bryan Kian Hsiang Low · Patrick Jaillet
- 2020 Poster: R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games
  Zhongxiang Dai · Yizhou Chen · Bryan Kian Hsiang Low · Patrick Jaillet · Teck-Hua Ho
- 2020 Poster: Private Outsourced Bayesian Optimization
  Dmitrii Kharkovskii · Zhongxiang Dai · Bryan Kian Hsiang Low
- 2020 Poster: Collaborative Machine Learning with Incentive-Aware Model Rewards
  Rachael Hwee Ling Sim · Yehong Zhang · Mun Choon Chan · Bryan Kian Hsiang Low
- 2019 Poster: Collective Model Fusion for Multiple Black-Box Experts
  Minh Hoang · Nghia Hoang · Bryan Kian Hsiang Low · Carleton Kingsford
- 2019 Oral: Collective Model Fusion for Multiple Black-Box Experts
  Minh Hoang · Nghia Hoang · Bryan Kian Hsiang Low · Carleton Kingsford
- 2017 Poster: Distributed Batch Gaussian Process Optimization
  Erik Daxberger · Bryan Kian Hsiang Low
- 2017 Talk: Distributed Batch Gaussian Process Optimization
  Erik Daxberger · Bryan Kian Hsiang Low