
Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces
Johannes Kirschner · Mojmir Mutny · Nicole Hiller · Rasmus Ischebeck · Andreas Krause

Thu Jun 13 05:10 PM -- 05:15 PM (PDT) @ Seaside Ballroom

Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. Our method scales well to high dimensions and makes use of a global Gaussian process model. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Further, we deploy our algorithm to optimize the beam intensity of a free electron laser with up to 40 parameters while satisfying the safe operation constraints.
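The core idea of the abstract can be sketched in a few lines: instead of maximizing the acquisition function over the full d-dimensional space, each iteration restricts it to a one-dimensional line through the current best point. The sketch below is a hypothetical illustration of that idea, not the authors' LineBO implementation: the RBF Gaussian process, the UCB acquisition, the random choice of line direction, and the fixed grid search along the line are all simplifying assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3):
    # Squared-exponential kernel between row vectors of A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Standard GP regression posterior mean and std at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Kq @ alpha
    v = np.linalg.solve(L, Kq.T)
    var = np.clip(1.0 - (v ** 2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def line_bo(f, d, n_iters=20, grid=51, beta=2.0, seed=0):
    # Illustrative line-restricted Bayesian optimization on [-1, 1]^d.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(2, d))          # small initial design
    y = np.array([f(x) for x in X])
    for _ in range(n_iters):
        x_best = X[np.argmax(y)]
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)   # random 1-D subspace
        ts = np.linspace(-1.0, 1.0, grid)
        line = np.clip(x_best + ts[:, None] * direction, -1, 1)
        mu, sigma = gp_posterior(X, y, line)
        x_next = line[np.argmax(mu + beta * sigma)]  # UCB along the line only
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Usage: maximize a quadratic with an invariant subspace (only 2 of 10
# coordinates matter) -- the effective-dimension regime the abstract mentions.
f = lambda x: -(x[0] - 0.3) ** 2 - (x[1] + 0.2) ** 2
x_star, y_star = line_bo(f, d=10)
```

The key point is that the acquisition step scans a 1-D grid rather than solving a non-convex problem in all d dimensions, while the global GP model is still fit on the full-dimensional data. The paper's safe variant would replace the UCB step along the line with a SafeOpt-style sub-solver.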

Author Information

Johannes Kirschner (ETH Zurich)

Johannes Kirschner is a postdoctoral fellow with Csaba Szepesvari at the University of Alberta. His research focuses on algorithms for reinforcement learning, experimental design, and data-driven decision making, with interests spanning theoretical foundations to real-world applications. He is supported by an Early Postdoc.Mobility fellowship of the Swiss National Science Foundation. Before joining the University of Alberta, Johannes obtained his PhD at ETH Zurich.

Mojmir Mutny (ETH Zurich)
Nicole Hiller (PSI)
Rasmus Ischebeck (PSI)
Andreas Krause (ETH Zurich)

Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center and Chair of the ETH AI Center, and co-founded the ETH spin-off LatticeFlow. Before that he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Max Planck Fellow at the Max Planck Institute for Intelligent Systems, an ELLIS Fellow, a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received the Rössler Prize, ERC Starting Investigator and ERC Consolidator grants, the German Pattern Recognition Award, an NSF CAREER award as well as the ETH Golden Owl teaching award. His research has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time award 2019 and the ICML Test of Time award 2020. Andreas Krause served as Program Co-Chair for ICML 2018, and currently serves as General Chair for ICML 2023 and as Action Editor for the Journal of Machine Learning Research.
