A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization

Abstract

Hamiltonian Monte Carlo (HMC) is one of the most successful sampling methods in machine learning. However, its performance is significantly affected by the choice of hyperparameter values. Existing approaches for optimizing the HMC hyperparameters either optimize a proxy for mixing speed, or treat the HMC chain as an implicit variational distribution and optimize a tractable lower bound that can be very loose in practice. Instead, we propose to optimize an objective that directly quantifies the speed of convergence to the target distribution. Our objective can be easily optimized using stochastic gradient descent. We evaluate our proposed method and compare it to baselines on a variety of problems, including sampling from synthetic 2D distributions, reconstructing sparse signals, learning deep latent variable models, and sampling molecular configurations from the Boltzmann distribution of a 22-atom molecule. We find that our method is competitive with or improves upon the baselines in all of these experiments.
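To make the mechanics concrete, below is a minimal PyTorch sketch of gradient-based HMC hyperparameter tuning. Everything in it is illustrative rather than taken from the paper: the names (`log_target`, `leapfrog`, `log_eps`) are ours, the training objective is a crude hypothetical stand-in for the convergence-speed objective described above, and the Metropolis accept/reject step is dropped so the trajectory stays differentiable end to end.

```python
# Minimal sketch (not the paper's exact method): tune the HMC step size by
# stochastic gradient descent through a differentiable leapfrog trajectory.
# Hypothetical simplifications: the objective below (mean log target density
# at the end of the trajectory) is only an illustrative stand-in for the
# paper's convergence-speed objective, and the Metropolis accept/reject step
# is omitted so the whole computation stays differentiable.
import torch

def log_target(x):
    # Toy target: standard 2-D Gaussian; any differentiable log-density works.
    return -0.5 * (x ** 2).sum(-1)

def grad_log_target(q):
    # create_graph=True keeps the graph so gradients also flow back to eps.
    (g,) = torch.autograd.grad(log_target(q).sum(), q, create_graph=True)
    return g

def leapfrog(q, p, eps, n_steps):
    # Standard leapfrog integrator with a learnable step size eps.
    p = p + 0.5 * eps * grad_log_target(q)          # initial half momentum step
    for i in range(n_steps):
        q = q + eps * p                              # full position step
        scale = 1.0 if i < n_steps - 1 else 0.5      # trailing half momentum step
        p = p + scale * eps * grad_log_target(q)
    return q, p

# Learnable hyperparameter: step size, parameterized on the log scale so it
# stays positive under unconstrained Adam updates.
log_eps = torch.tensor(-1.0, requires_grad=True)
opt = torch.optim.Adam([log_eps], lr=0.05)

for it in range(300):
    q0 = (5.0 * torch.randn(256, 2)).requires_grad_()  # broad initial distribution
    p0 = torch.randn_like(q0)                           # resampled momenta
    qT, _ = leapfrog(q0, p0, log_eps.exp(), n_steps=10)
    # Illustrative surrogate: push the chain's end states toward high target
    # density. The paper's actual convergence-speed objective differs.
    loss = -log_target(qT).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"tuned step size: {log_eps.exp().item():.3f}")
```

In practice one would also learn per-dimension step sizes or a mass matrix and handle the non-differentiable accept/reject step; the paper's objective and its treatment of these details differ from this toy surrogate.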
Author Information
Andrew Campbell (University of Oxford)
Wenlong Chen (Imperial College London)
Vincent Stimper (University of Cambridge)
Jose Miguel Hernandez-Lobato (University of Cambridge)
Yichuan Zhang (Boltzbit Limited)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization
  Thu. Jul 22nd, 04:00 -- 06:00 PM
More from the Same Authors
- 2023: Leveraging Task Structures for Improved Identifiability in Neural Network Representations
  Wenlin Chen · Julien Horwood · Juyeon Heo · Jose Miguel Hernandez-Lobato
- 2023: Minimal Random Code Learning with Mean-KL Parameterization
  Jihao Andreas Lin · Gergely Flamich · Jose Miguel Hernandez-Lobato
- 2022 Poster: Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
  Javier Antorán · David Janz · James Allingham · Erik Daxberger · Riccardo Barbano · Eric Nalisnick · Jose Miguel Hernandez-Lobato
- 2022 Spotlight: Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
  Javier Antorán · David Janz · James Allingham · Erik Daxberger · Riccardo Barbano · Eric Nalisnick · Jose Miguel Hernandez-Lobato
- 2022 Poster: Action-Sufficient State Representation Learning for Control with Structural Constraints
  Biwei Huang · Chaochao Lu · Liu Leqi · Jose Miguel Hernandez-Lobato · Clark Glymour · Bernhard Schölkopf · Kun Zhang
- 2022 Spotlight: Action-Sufficient State Representation Learning for Control with Structural Constraints
  Biwei Huang · Chaochao Lu · Liu Leqi · Jose Miguel Hernandez-Lobato · Clark Glymour · Bernhard Schölkopf · Kun Zhang
- 2022 Poster: Fast Relative Entropy Coding with A* coding
  Gergely Flamich · Stratis Markou · Jose Miguel Hernandez-Lobato
- 2022 Spotlight: Fast Relative Entropy Coding with A* coding
  Gergely Flamich · Stratis Markou · Jose Miguel Hernandez-Lobato
- 2021 Poster: Active Slices for Sliced Stein Discrepancy
  Wenbo Gong · Kaibo Zhang · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2021 Spotlight: Active Slices for Sliced Stein Discrepancy
  Wenbo Gong · Kaibo Zhang · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2021 Poster: Bayesian Deep Learning via Subnetwork Inference
  Erik Daxberger · Eric Nalisnick · James Allingham · Javier Antorán · Jose Miguel Hernandez-Lobato
- 2021 Spotlight: Bayesian Deep Learning via Subnetwork Inference
  Erik Daxberger · Eric Nalisnick · James Allingham · Javier Antorán · Jose Miguel Hernandez-Lobato
- 2020: "Latent Space Optimization with Deep Generative Models"
  Jose Miguel Hernandez-Lobato
- 2020: Invited Talk: Efficient Missing-value Acquisition with Variational Autoencoders
  Jose Miguel Hernandez-Lobato
- 2020 Poster: Reinforcement Learning for Molecular Design Guided by Quantum Mechanics
  Gregor Simm · Robert Pinsler · Jose Miguel Hernandez-Lobato
- 2020 Poster: A Generative Model for Molecular Distance Geometry
  Gregor Simm · Jose Miguel Hernandez-Lobato
- 2019 Poster: Dropout as a Structured Shrinkage Prior
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Dropout as a Structured Shrinkage Prior
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Poster: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
  Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang
- 2019 Poster: Variational Implicit Processes
  Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2019 Oral: Variational Implicit Processes
  Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato
- 2019 Oral: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
  Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang
- 2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2017 Poster: Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space
  Jose Miguel Hernandez-Lobato · James Requeima · Edward Pyzer-Knapp · Alan Aspuru-Guzik
- 2017 Poster: Grammar Variational Autoencoder
  Matt J. Kusner · Brooks Paige · Jose Miguel Hernandez-Lobato
- 2017 Talk: Grammar Variational Autoencoder
  Matt J. Kusner · Brooks Paige · Jose Miguel Hernandez-Lobato
- 2017 Talk: Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space
  Jose Miguel Hernandez-Lobato · James Requeima · Edward Pyzer-Knapp · Alan Aspuru-Guzik
- 2017 Poster: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck
- 2017 Talk: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck