Chemical space is so large that brute-force searches for interesting new molecules are infeasible. High-throughput virtual screening on computer clusters can speed up the discovery process by running very large numbers of simulations in parallel, e.g., hundreds or thousands of measurements at a time. Bayesian optimization (BO) can produce additional acceleration by sequentially identifying the most useful simulations or experiments to perform next. However, current BO methods cannot scale to the large numbers of parallel measurements and the massive libraries of molecules currently used in high-throughput screening. Here, we propose a scalable solution based on a parallel and distributed implementation of Thompson sampling (PDTS). We show that, in small-scale problems, PDTS performs similarly to parallel expected improvement (EI), a batch version of the most widely used BO heuristic. Additionally, in settings where parallel EI does not scale, PDTS outperforms other scalable baselines such as greedy search, epsilon-greedy approaches, and random search. These results show that PDTS is a successful solution for large-scale parallel BO.
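The batch-selection idea behind PDTS can be illustrated with a short sketch: each of the K parallel workers draws its own sample from the surrogate model's posterior and proposes the candidate molecule that maximizes that sampled objective. The snippet below is a minimal illustration, assuming a Bayesian linear-regression surrogate over precomputed molecular features; the model choice, feature dimension, and hyperparameters (alpha, beta) are illustrative assumptions, not the probabilistic model used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def posterior(X, y, alpha=1.0, beta=25.0):
    # Bayesian linear regression: prior w ~ N(0, I / alpha), Gaussian noise precision beta.
    d = X.shape[1]
    cov = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
    mean = beta * cov @ X.T @ y
    return mean, cov

def thompson_batch(pool, X_obs, y_obs, batch_size):
    # Each of `batch_size` parallel workers draws one posterior sample and
    # proposes the not-yet-selected candidate that maximizes its sample.
    mean, cov = posterior(X_obs, y_obs)
    remaining = list(range(len(pool)))
    batch = []
    for _ in range(batch_size):
        w = rng.multivariate_normal(mean, cov)    # one posterior draw per worker
        scores = pool[remaining] @ w              # sampled objective over the pool
        pick = remaining.pop(int(np.argmax(scores)))
        batch.append(pick)
    return batch                                  # indices of molecules to evaluate next

# Toy usage with a hypothetical 500-molecule pool of 8-dimensional features
# and a hidden linear objective standing in for the simulated property.
d = 8
pool = rng.normal(size=(500, d))
true_w = rng.normal(size=d)
seen = rng.choice(500, size=10, replace=False)
X_obs = pool[seen]
y_obs = X_obs @ true_w + 0.2 * rng.normal(size=len(seen))
print(thompson_batch(pool, X_obs, y_obs, batch_size=5))

Because each worker's proposal depends only on its own posterior sample, the batch can be assembled independently across machines, which is what makes this strategy attractive for distributed, pool-based screening.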
Author Information
Jose Miguel Hernandez-Lobato (University of Cambridge)
James Requeima (University of Cambridge)
Edward Pyzer-Knapp (IBM)
Alan Aspuru-Guzik (University of Toronto)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space
  Wed. Aug 9th 01:42 -- 02:00 AM, Room C4.1
More from the Same Authors
- 2023: Leveraging Task Structures for Improved Identifiability in Neural Network Representations
  Wenlin Chen · Julien Horwood · Juyeon Heo · Jose Miguel Hernandez-Lobato
- 2023: Minimal Random Code Learning with Mean-KL Parameterization
  Jihao Andreas Lin · Gergely Flamich · Jose Miguel Hernandez-Lobato
- 2022: Alan Aspuru-Guzik, University of Toronto
  Alan Aspuru-Guzik
- 2022 Poster: Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
  Javier Antorán · David Janz · James Allingham · Erik Daxberger · Riccardo Barbano · Eric Nalisnick · Jose Miguel Hernandez-Lobato
- 2022 Spotlight: Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
  Javier Antorán · David Janz · James Allingham · Erik Daxberger · Riccardo Barbano · Eric Nalisnick · Jose Miguel Hernandez-Lobato
- 2022 Poster: Action-Sufficient State Representation Learning for Control with Structural Constraints
  Biwei Huang · Chaochao Lu · Liu Leqi · Jose Miguel Hernandez-Lobato · Clark Glymour · Bernhard Schölkopf · Kun Zhang
- 2022 Spotlight: Action-Sufficient State Representation Learning for Control with Structural Constraints
  Biwei Huang · Chaochao Lu · Liu Leqi · Jose Miguel Hernandez-Lobato · Clark Glymour · Bernhard Schölkopf · Kun Zhang
- 2022 Poster: Fast Relative Entropy Coding with A* coding
  Gergely Flamich · Stratis Markou · Jose Miguel Hernandez-Lobato
- 2022 Spotlight: Fast Relative Entropy Coding with A* coding
  Gergely Flamich · Stratis Markou · Jose Miguel Hernandez-Lobato
- 2021 Poster: Active Slices for Sliced Stein Discrepancy
  Wenbo Gong · Kaibo Zhang · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2021 Spotlight: Active Slices for Sliced Stein Discrepancy
  Wenbo Gong · Kaibo Zhang · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2021 Poster: A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization
  Andrew Campbell · Wenlong Chen · Vincent Stimper · Jose Miguel Hernandez-Lobato · Yichuan Zhang
- 2021 Spotlight: A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization
  Andrew Campbell · Wenlong Chen · Vincent Stimper · Jose Miguel Hernandez-Lobato · Yichuan Zhang
- 2021 Poster: Bayesian Deep Learning via Subnetwork Inference
  Erik Daxberger · Eric Nalisnick · James Allingham · Javier Antorán · Jose Miguel Hernandez-Lobato
- 2021 Spotlight: Bayesian Deep Learning via Subnetwork Inference
  Erik Daxberger · Eric Nalisnick · James Allingham · Javier Antorán · Jose Miguel Hernandez-Lobato
2020 : "Latent Space Optimization with Deep Generative Models" »
Jose Miguel Hernandez-Lobato -
2020 : Invited Talk: Efficient Missing-value Acquisition with Variational Autoencoders »
Jose Miguel Hernandez-Lobato -
2020 Poster: Reinforcement Learning for Molecular Design Guided by Quantum Mechanics »
Gregor Simm · Robert Pinsler · Jose Miguel Hernandez-Lobato -
2020 Poster: A Generative Model for Molecular Distance Geometry »
Gregor Simm · Jose Miguel Hernandez-Lobato -
2020 Poster: TaskNorm: Rethinking Batch Normalization for Meta-Learning »
John Bronskill · Jonathan Gordon · James Requeima · Sebastian Nowozin · Richard E Turner -
- 2019: Networking Lunch (provided) + Poster Session
  Abraham Stanway · Alex Robson · Aneesh Rangnekar · Ashesh Chattopadhyay · Ashley Pilipiszyn · Benjamin LeRoy · Bolong Cheng · Ce Zhang · Chaopeng Shen · Christian Schroeder · Christian Clough · Clement DUHART · Clement Fung · Cozmin Ududec · Dali Wang · David Dao · di wu · Dimitrios Giannakis · Dino Sejdinovic · Doina Precup · Duncan Watson-Parris · Gege Wen · George Chen · Gopal Erinjippurath · Haifeng Li · Han Zou · Herke van Hoof · Hillary A Scannell · Hiroshi Mamitsuka · Hongbao Zhang · Jaegul Choo · James Wang · James Requeima · Jessica Hwang · Jinfan Xu · Johan Mathe · Jonathan Binas · Joonseok Lee · Kalai Ramea · Kate Duffy · Kevin McCloskey · Kris Sankaran · Lester Mackey · Letif Mones · Loubna Benabbou · Lynn Kaack · Matthew Hoffman · Mayur Mudigonda · Mehrdad Mahdavi · Michael McCourt · Mingchao Jiang · Mohammad Mahdi Kamani · Neel Guha · Niccolo Dalmasso · Nick Pawlowski · Nikola Milojevic-Dupont · Paulo Orenstein · Pedram Hassanzadeh · Pekka Marttinen · Ramesh Nair · Sadegh Farhang · Samuel Kaski · Sandeep Manjanna · Sasha Luccioni · Shuby Deshpande · Soo Kim · Soukayna Mouatadid · Sunghyun Park · Tao Lin · Telmo Felgueira · Thomas Hornigold · Tianle Yuan · Tom Beucler · Tracy Cui · Volodymyr Kuleshov · Wei Yu · yang song · Ydo Wexler · Yoshua Bengio · Zhecheng Wang · Zhuangfang Yi · Zouheir Malki
- 2019 Poster: Dropout as a Structured Shrinkage Prior
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Dropout as a Structured Shrinkage Prior
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Poster: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
  Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang
- 2019 Poster: Variational Implicit Processes
  Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2019 Oral: Variational Implicit Processes
  Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2019 Oral: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
  Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang
- 2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2017 Poster: Grammar Variational Autoencoder
  Matt J. Kusner · Brooks Paige · Jose Miguel Hernandez-Lobato
- 2017 Talk: Grammar Variational Autoencoder
  Matt J. Kusner · Brooks Paige · Jose Miguel Hernandez-Lobato
- 2017 Poster: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck
- 2017 Talk: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck