Optimization of high-dimensional black-box functions is an extremely challenging problem. While Bayesian optimization (BO) has emerged as a popular approach for optimizing black-box functions, its applicability has been limited to low-dimensional problems because of the computational and statistical challenges that arise in high-dimensional settings. In this paper, we propose to tackle these challenges by (1) assuming a latent additive structure in the function and inferring it for more efficient and effective BO, and (2) performing multiple evaluations in parallel to reduce the number of iterations the method requires. Our approach learns the latent structure with Gibbs sampling and constructs batched queries using determinantal point processes. Experiments on both synthetic and real-world functions demonstrate that the proposed method outperforms existing state-of-the-art approaches.
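The abstract's second ingredient, batch construction via determinantal point processes, rewards batches whose kernel submatrix has a large determinant, which favors diverse query points. Below is a minimal illustrative sketch of that idea, not the paper's implementation: the RBF kernel, the greedy log-determinant (MAP-style) selection, and all function and parameter names are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential similarity between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def greedy_dpp_batch(candidates, batch_size, lengthscale=1.0):
    """Greedily grow a batch by maximizing the log-determinant of the
    kernel submatrix -- a simple MAP-style surrogate for DPP sampling
    that discourages near-duplicate evaluation points."""
    K = rbf_kernel(candidates, candidates, lengthscale)
    selected = []
    for _ in range(batch_size):
        best, best_ld = None, -np.inf
        for i in range(len(candidates)):
            if i in selected:
                continue
            idx = selected + [i]
            # Small jitter keeps the submatrix numerically invertible.
            _, ld = np.linalg.slogdet(K[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx)))
            if ld > best_ld:
                best, best_ld = i, ld
        selected.append(best)
    return selected
```

On a candidate set containing two nearly identical points, the log-determinant objective makes picking both essentially worthless (their 2x2 kernel submatrix is almost singular), so the greedy batch spreads out instead.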
Author Information
Zi Wang (MIT)
Chengtao Li (MIT)
Stefanie Jegelka (MIT)
Pushmeet Kohli (Microsoft Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning »
  Mon. Aug 7th 08:30 AM -- 12:00 PM, Room Gallery #133
More from the Same Authors
- 2021 Town Hall: Town Hall »
  John Langford · Marina Meila · Tong Zhang · Le Song · Stefanie Jegelka · Csaba Szepesvari
- 2020: Negative Dependence and Sampling »
  Stefanie Jegelka
- 2020 Poster: Strength from Weakness: Fast Learning Using Weak Supervision »
  Joshua Robinson · Stefanie Jegelka · Suvrit Sra
- 2019 Poster: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Oral: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2018 Poster: Representation Learning on Graphs with Jumping Knowledge Networks »
  Keyulu Xu · Chengtao Li · Yonglong Tian · Tomohiro Sonobe · Ken-ichi Kawarabayashi · Stefanie Jegelka
- 2018 Oral: Representation Learning on Graphs with Jumping Knowledge Networks »
  Keyulu Xu · Chengtao Li · Yonglong Tian · Tomohiro Sonobe · Ken-ichi Kawarabayashi · Stefanie Jegelka
- 2017 Poster: Max-value Entropy Search for Efficient Bayesian Optimization »
  Zi Wang · Stefanie Jegelka
- 2017 Poster: Learning Continuous Semantic Representations of Symbolic Expressions »
  Miltiadis Allamanis · Pankajan Chanthirasegaran · Pushmeet Kohli · Charles Sutton
- 2017 Poster: Robust Budget Allocation via Continuous Submodular Functions »
  Matthew J Staib · Stefanie Jegelka
- 2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Poster: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning »
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Talk: Robust Budget Allocation via Continuous Submodular Functions »
  Matthew J Staib · Stefanie Jegelka
- 2017 Talk: Learning Continuous Semantic Representations of Symbolic Expressions »
  Miltiadis Allamanis · Pankajan Chanthirasegaran · Pushmeet Kohli · Charles Sutton
- 2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Talk: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning »
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Talk: Max-value Entropy Search for Efficient Bayesian Optimization »
  Zi Wang · Stefanie Jegelka
- 2017 Poster: RobustFill: Neural Program Learning under Noisy I/O »
  Jacob Devlin · Jonathan Uesato · Surya Bhupatiraju · Rishabh Singh · Abdelrahman Mohammad · Pushmeet Kohli
- 2017 Talk: RobustFill: Neural Program Learning under Noisy I/O »
  Jacob Devlin · Jonathan Uesato · Surya Bhupatiraju · Rishabh Singh · Abdelrahman Mohammad · Pushmeet Kohli