Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. To scale the method while keeping its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems. We show that our algorithm converges globally and attains a fast local convergence rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. Our method scales well to high dimensions and makes use of a global Gaussian process model. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where it obtains competitive performance. Further, we deploy our algorithm to optimize the beam intensity of a free-electron laser with up to 40 parameters while satisfying safe operation constraints.
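The core loop described in the abstract — keep a single global Gaussian process model, but restrict each acquisition step to a one-dimensional line through the current best point — can be sketched as follows. This is a hypothetical numpy-only illustration, not the authors' LineBO implementation: the RBF kernel, the UCB acquisition, the grid search along the line, the random choice of direction (one of several possible sub-problem choices), and the box bounds are all illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between the rows of A and B (assumed lengthscale)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

def line_bo(f, dim, n_iters=30, beta=2.0, grid=101, seed=0):
    """Maximize f over [-1, 1]^dim by solving iteratively chosen 1-D sub-problems."""
    rng = np.random.default_rng(seed)
    X = [rng.uniform(-1, 1, dim)]
    y = [f(X[0])]
    for _ in range(n_iters):
        incumbent = X[int(np.argmax(y))]
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)                 # random direction defines the line
        ts = np.linspace(-1, 1, grid)
        cand = np.clip(incumbent[None, :] + ts[:, None] * d, -1.0, 1.0)
        # the global GP is conditioned on all data, but the acquisition
        # (UCB here) is only maximized over the 1-D candidate line
        mu, var = gp_posterior(np.array(X), np.array(y), cand)
        x_next = cand[int(np.argmax(mu + beta * np.sqrt(var)))]
        X.append(x_next)
        y.append(f(x_next))
    i = int(np.argmax(y))
    return X[i], y[i]

if __name__ == "__main__":
    x_best, y_best = line_bo(lambda x: -np.sum(x ** 2), dim=5)
    print(x_best, y_best)
```

The key point of the sketch is that the inner optimization is over a scalar `t`, so its cost does not grow with the ambient dimension; the safe variant in the paper replaces the UCB sub-solver with SafeOpt on each line.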
Author Information
Johannes Kirschner (ETH Zurich)

Johannes Kirschner is a postdoctoral fellow with Csaba Szepesvari at the University of Alberta. His research focuses on algorithms for reinforcement learning, experimental design, and data-driven decision making, spanning theoretical foundations to real-world applications. He is supported by an Early Postdoc.Mobility fellowship of the Swiss National Science Foundation. Before joining the University of Alberta, Johannes obtained his PhD at ETH Zurich.
Mojmir Mutny (ETH Zurich)
Nicole Hiller (PSI)
Rasmus Ischebeck (PSI)
Andreas Krause (ETH Zurich)

Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center and Chair of the ETH AI Center, and co-founded the ETH spin-off LatticeFlow. Before that he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Max Planck Fellow at the Max Planck Institute for Intelligent Systems, an ELLIS Fellow, a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received the Rössler Prize, ERC Starting Investigator and ERC Consolidator grants, the German Pattern Recognition Award, an NSF CAREER award as well as the ETH Golden Owl teaching award. His research has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time award 2019 and the ICML Test of Time award 2020. Andreas Krause served as Program Co-Chair for ICML 2018, and currently serves as General Chair for ICML 2023 and as Action Editor for the Journal of Machine Learning Research.
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces »
  Fri. Jun 14th 01:30 -- 04:00 AM Room Pacific Ballroom #147
More from the Same Authors
- 2022 : Recovering Stochastic Dynamics via Gaussian Schrödinger Bridges »
  Ya-Ping Hsieh · Charlotte Bunne · Marco Cuturi · Andreas Krause
- 2022 : Recovering Stochastic Dynamics via Gaussian Schrödinger Bridges »
  Charlotte Bunne · Ya-Ping Hsieh · Marco Cuturi · Andreas Krause
- 2023 : Anytime Model Selection in Linear Bandits »
  Parnian Kassraie · Aldo Pacchiano · Nicolas Emmenegger · Andreas Krause
- 2023 : Unbalanced Diffusion Schrödinger Bridge »
  Matteo Pariset · Ya-Ping Hsieh · Charlotte Bunne · Andreas Krause · Valentin De Bortoli
- 2023 : Aligned Diffusion Schrödinger Bridges »
  Vignesh Ram Somnath · Matteo Pariset · Ya-Ping Hsieh · Maria Rodriguez Martinez · Andreas Krause · Charlotte Bunne
- 2023 : Graph Neural Network Powered Bayesian Optimization for Large Molecular Spaces »
  Miles Wang-Henderson · Bartu Soyuer · Parnian Kassraie · Andreas Krause · Ilija Bogunovic
- 2023 : Anytime Model Selection in Linear Bandits »
  Parnian Kassraie · Aldo Pacchiano · Nicolas Emmenegger · Andreas Krause
- 2023 Panel: ICML Education Outreach Panel »
  Andreas Krause · Barbara Engelhardt · Emma Brunskill · Kyunghyun Cho
- 2022 : Opening Remarks »
  Willie Neiswanger · Mojmir Mutny · Ilija Bogunovic
- 2022 Workshop: Adaptive Experimental Design and Active Learning in the Real World »
  Mojmir Mutny · Willie Neiswanger · Ilija Bogunovic · Stefano Ermon · Yisong Yue · Andreas Krause
- 2022 Poster: Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning »
  Max Paulus · Giulia Zarpellon · Andreas Krause · Laurent Charlin · Chris Maddison
- 2022 Spotlight: Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning »
  Max Paulus · Giulia Zarpellon · Andreas Krause · Laurent Charlin · Chris Maddison
- 2022 Poster: Interactively Learning Preference Constraints in Linear Bandits »
  David Lindner · Sebastian Tschiatschek · Katja Hofmann · Andreas Krause
- 2022 Spotlight: Interactively Learning Preference Constraints in Linear Bandits »
  David Lindner · Sebastian Tschiatschek · Katja Hofmann · Andreas Krause
- 2022 Poster: Adaptive Gaussian Process Change Point Detection »
  Edoardo Caldarelli · Philippe Wenk · Stefan Bauer · Andreas Krause
- 2022 Poster: Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation »
  Pier Giuseppe Sessa · Maryam Kamgarpour · Andreas Krause
- 2022 Poster: Meta-Learning Hypothesis Spaces for Sequential Decision-making »
  Parnian Kassraie · Jonas Rothfuss · Andreas Krause
- 2022 Spotlight: Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation »
  Pier Giuseppe Sessa · Maryam Kamgarpour · Andreas Krause
- 2022 Spotlight: Meta-Learning Hypothesis Spaces for Sequential Decision-making »
  Parnian Kassraie · Jonas Rothfuss · Andreas Krause
- 2022 Spotlight: Adaptive Gaussian Process Change Point Detection »
  Edoardo Caldarelli · Philippe Wenk · Stefan Bauer · Andreas Krause
- 2021 : Data Summarization via Bilevel Coresets »
  Andreas Krause
- 2021 Poster: PopSkipJump: Decision-Based Attack for Probabilistic Classifiers »
  Carl-Johann Simon-Gabriel · Noman Ahmed Sheikh · Andreas Krause
- 2021 Spotlight: PopSkipJump: Decision-Based Attack for Probabilistic Classifiers »
  Carl-Johann Simon-Gabriel · Noman Ahmed Sheikh · Andreas Krause
- 2021 Poster: PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees »
  Jonas Rothfuss · Vincent Fortuin · Martin Josifoski · Andreas Krause
- 2021 Spotlight: PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees »
  Jonas Rothfuss · Vincent Fortuin · Martin Josifoski · Andreas Krause
- 2021 Poster: Online Submodular Resource Allocation with Applications to Rebalancing Shared Mobility Systems »
  Pier Giuseppe Sessa · Ilija Bogunovic · Andreas Krause · Maryam Kamgarpour
- 2021 Spotlight: Online Submodular Resource Allocation with Applications to Rebalancing Shared Mobility Systems »
  Pier Giuseppe Sessa · Ilija Bogunovic · Andreas Krause · Maryam Kamgarpour
- 2021 Poster: No-regret Algorithms for Capturing Events in Poisson Point Processes »
  Mojmir Mutny · Andreas Krause
- 2021 Poster: Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning »
  Sebastian Curi · Ilija Bogunovic · Andreas Krause
- 2021 Spotlight: No-regret Algorithms for Capturing Events in Poisson Point Processes »
  Mojmir Mutny · Andreas Krause
- 2021 Spotlight: Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning »
  Sebastian Curi · Ilija Bogunovic · Andreas Krause
- 2021 Poster: Bias-Robust Bayesian Optimization via Dueling Bandits »
  Johannes Kirschner · Andreas Krause
- 2021 Poster: Fast Projection Onto Convex Smooth Constraints »
  Ilnura Usmanova · Maryam Kamgarpour · Andreas Krause · Kfir Levy
- 2021 Spotlight: Fast Projection Onto Convex Smooth Constraints »
  Ilnura Usmanova · Maryam Kamgarpour · Andreas Krause · Kfir Levy
- 2021 Spotlight: Bias-Robust Bayesian Optimization via Dueling Bandits »
  Johannes Kirschner · Andreas Krause
- 2020 : Constrained Maximization of Lattice Submodular Functions »
  Aytunc Sahin · Joachim Buhmann · Andreas Krause
- 2020 Poster: From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models »
  Aytunc Sahin · Yatao Bian · Joachim Buhmann · Andreas Krause
- 2020 Test of Time: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design »
  Niranjan Srinivas · Andreas Krause · Sham Kakade · Matthias Seeger
- 2019 Poster: Online Variance Reduction with Mixtures »
  Zalán Borsos · Sebastian Curi · Yehuda Levy · Andreas Krause
- 2019 Oral: Online Variance Reduction with Mixtures »
  Zalán Borsos · Sebastian Curi · Yehuda Levy · Andreas Krause
- 2019 Poster: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Poster: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs »
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Oral: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Oral: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs »
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference »
  Yatao Bian · Joachim Buhmann · Andreas Krause
- 2019 Oral: Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference »
  Yatao Bian · Joachim Buhmann · Andreas Krause
- 2017 Poster: Guarantees for Greedy Maximization of Non-submodular Functions with Applications »
  Yatao Bian · Joachim Buhmann · Andreas Krause · Sebastian Tschiatschek
- 2017 Poster: Differentially Private Submodular Maximization: Data Summarization in Disguise »
  Marko Mitrovic · Mark Bun · Andreas Krause · Amin Karbasi
- 2017 Poster: Deletion-Robust Submodular Maximization: Data Summarization with "the Right to be Forgotten" »
  Baharan Mirzasoleiman · Amin Karbasi · Andreas Krause
- 2017 Poster: Probabilistic Submodular Maximization in Sub-Linear Time »
  Serban A Stan · Morteza Zadimoghaddam · Andreas Krause · Amin Karbasi
- 2017 Talk: Deletion-Robust Submodular Maximization: Data Summarization with "the Right to be Forgotten" »
  Baharan Mirzasoleiman · Amin Karbasi · Andreas Krause
- 2017 Talk: Probabilistic Submodular Maximization in Sub-Linear Time »
  Serban A Stan · Morteza Zadimoghaddam · Andreas Krause · Amin Karbasi
- 2017 Talk: Guarantees for Greedy Maximization of Non-submodular Functions with Applications »
  Yatao Bian · Joachim Buhmann · Andreas Krause · Sebastian Tschiatschek
- 2017 Talk: Differentially Private Submodular Maximization: Data Summarization in Disguise »
  Marko Mitrovic · Mark Bun · Andreas Krause · Amin Karbasi
- 2017 Poster: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause
- 2017 Poster: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Talk: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Talk: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause