A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization

Andrew Campbell · Wenlong Chen · Vincent Stimper · José Miguel Hernández-Lobato · Yichuan Zhang

Keywords: Monte Carlo Methods

Poster: Spot B6 in Virtual World, Thu 22 Jul 9–11 a.m. PDT
Spotlight presentation: Bayesian Methods, Thu 22 Jul 7–8 a.m. PDT

Abstract


Hamiltonian Monte Carlo (HMC) is one of the most successful sampling methods in machine learning. However, its performance is significantly affected by the choice of hyperparameter values. Existing approaches for optimizing the HMC hyperparameters either optimize a proxy for mixing speed or treat the HMC chain as an implicit variational distribution and optimize a tractable lower bound that can be very loose in practice. Instead, we propose to optimize an objective that directly quantifies the speed of convergence to the target distribution. Our objective can be easily optimized using stochastic gradient descent. We evaluate our proposed method and compare it with baselines on a variety of problems, including sampling from synthetic 2D distributions, reconstructing sparse signals, learning deep latent variable models, and sampling molecular configurations from the Boltzmann distribution of a 22-atom molecule. We find that our method is competitive with or improves upon the alternative baselines in all of these experiments.
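To make the general recipe concrete, below is a minimal, self-contained PyTorch sketch of gradient-based HMC hyperparameter tuning. It learns a step size by stochastic gradient ascent on expected squared jump distance, a standard differentiable proxy for mixing speed of the kind the abstract notes prior work optimizing; the paper's own convergence-speed objective is not reproduced here, and the Gaussian target, batch size, and optimizer settings are illustrative assumptions.

import torch

def log_prob(x):
    # Stand-in target density: standard 2D Gaussian (assumption for this sketch).
    return -0.5 * (x ** 2).sum(-1)

def grad_log_p(x):
    # create_graph=True keeps the graph so the loss can be
    # differentiated with respect to the step size.
    return torch.autograd.grad(log_prob(x).sum(), x, create_graph=True)[0]

def leapfrog(x, p, eps, n_steps):
    # Leapfrog integrator; gradients flow through eps at every step.
    g = grad_log_p(x)
    p = p + 0.5 * eps * g
    for i in range(n_steps):
        x = x + eps * p
        g = grad_log_p(x)
        if i < n_steps - 1:
            p = p + eps * g
    p = p + 0.5 * eps * g
    return x, p

log_eps = torch.zeros(1, requires_grad=True)   # learn log step size for positivity
opt = torch.optim.Adam([log_eps], lr=0.05)

for it in range(200):
    eps = log_eps.exp()
    x0 = torch.randn(256, 2).requires_grad_(True)  # batch of chain states
    p0 = torch.randn_like(x0)                      # fresh momenta
    x1, p1 = leapfrog(x0, p0, eps, n_steps=5)
    # Hamiltonians before and after the trajectory.
    h0 = -log_prob(x0) + 0.5 * (p0 ** 2).sum(-1)
    h1 = -log_prob(x1) + 0.5 * (p1 ** 2).sum(-1)
    # MH acceptance probability min(1, exp(h0 - h1)), differentiable in eps.
    accept = torch.exp(torch.clamp(h0 - h1, max=0.0))
    # Expected squared jump distance, weighted by acceptance probability;
    # a full sampler would apply the (non-differentiable) accept/reject step,
    # which is why the weighted surrogate is used for tuning.
    esjd = (accept * ((x1 - x0) ** 2).sum(-1)).mean()
    opt.zero_grad()
    (-esjd).backward()   # ascend ESJD
    opt.step()

print(f"tuned step size: {log_eps.exp().item():.3f}")

Reparameterizing through log_eps keeps the step size positive, and weighting the jump distance by the acceptance probability is what makes the otherwise discrete accept/reject step amenable to stochastic gradient descent.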
