Natural-gradient methods enable fast and simple algorithms for variational inference, but due to computational difficulties, their use is mostly limited to minimal exponential-family (EF) approximations. In this paper, we extend natural-gradient methods to estimate structured approximations such as mixtures of EF distributions. Such approximations can fit complex, multimodal posterior distributions and are generally more accurate than unimodal EF approximations. By using a minimal conditional-EF representation of such approximations, we derive simple natural-gradient updates. Our empirical results demonstrate faster convergence of our natural-gradient method than of black-box gradient-based methods. Our work expands the scope of natural gradients for Bayesian inference and makes them more widely applicable than before.
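The key property exploited by natural-gradient variational inference is that, for (conditional) EF approximations, the natural gradient with respect to the natural parameters equals the ordinary gradient with respect to the expectation parameters, which gives closed-form updates. Below is a minimal sketch of this idea for the simplest case of a single full-covariance Gaussian approximation fit to a hypothetical toy target; the function names, step size, and target are illustrative assumptions, not the paper's exact algorithm or experiments. The paper's mixture-of-EF updates generalize this component-wise form.

```python
# Minimal sketch (assumptions noted above): natural-gradient VI with a single
# full-covariance Gaussian q(z) = N(mu, S^{-1}), fit to a toy log-density.
import numpy as np

def log_p_grad_hess(z):
    # Hypothetical target: an unnormalized Gaussian log-density N(1, 0.5^2) per dim.
    # Returns log p(z), its gradient, and its Hessian.
    prec = 1.0 / 0.5**2
    logp = -0.5 * prec * np.sum((z - 1.0) ** 2)
    g = -prec * (z - 1.0)
    H = -prec * np.eye(z.size)
    return logp, g, H

def natgrad_vi(dim=2, steps=200, rho=0.1, n_mc=20, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    S = np.eye(dim)                       # precision matrix of q
    for _ in range(steps):
        # Monte-Carlo estimates of E_q[grad log p] and E_q[hess log p]
        L = np.linalg.cholesky(np.linalg.inv(S))
        g_bar = np.zeros(dim)
        H_bar = np.zeros((dim, dim))
        for _ in range(n_mc):
            z = mu + L @ rng.standard_normal(dim)
            _, g, H = log_p_grad_hess(z)
            g_bar += g / n_mc
            H_bar += H / n_mc
        # Natural-gradient step in (mean, precision) form:
        #   S  <- (1 - rho) * S - rho * E_q[hess log p]
        #   mu <- mu + rho * S^{-1} E_q[grad log p]
        S = (1 - rho) * S - rho * H_bar
        mu = mu + rho * np.linalg.solve(S, g_bar)
    return mu, S

mu, S = natgrad_vi()
print("mean ~", mu)                # approaches the target mean (1, 1)
print("precision ~", np.diag(S))   # approaches 1 / 0.5^2 = 4 on the diagonal
```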
Author Information
Wu Lin (University of British Columbia)
Mohammad Emtiyaz Khan (RIKEN)
Mark Schmidt (University of British Columbia)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations
  Wed. Jun 12th, 01:30 -- 04:00 AM, Pacific Ballroom #217
More from the Same Authors
- 2023 Poster: Target-based Surrogates for Stochastic Optimization
  Jonathan Lavington · Sharan Vaswani · Reza Babanezhad · Mark Schmidt · Nicolas Le Roux
- 2023 Poster: Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning
  Wu Lin · Valentin Duruisseaux · Melvin Leok · Frank Nielsen · Khan Emtiyaz · Mark Schmidt
- 2023 Poster: Let's Make Block Coordinate Descent Converge Faster: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
  Julie Nutini · Issam Laradji · Mark Schmidt
- 2021: Structured second-order methods via natural-gradient descent
  Wu Lin
- 2021: Invited Talk 2: Q&A
  Mohammad Emtiyaz Khan
- 2021 Poster: Tractable structured natural-gradient descent using local parameterizations
  Wu Lin · Frank Nielsen · Khan Emtiyaz · Mark Schmidt
- 2021 Spotlight: Tractable structured natural-gradient descent using local parameterizations
  Wu Lin · Frank Nielsen · Khan Emtiyaz · Mark Schmidt
- 2021 Poster: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2021 Oral: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2020 Poster: Training Binary Neural Networks using the Bayesian Learning Rule
  Xiangming Meng · Roman Bachmann · Mohammad Emtiyaz Khan
- 2020 Poster: Handling the Positive-Definite Constraint in the Bayesian Learning Rule
  Wu Lin · Mark Schmidt · Mohammad Emtiyaz Khan
- 2020 Poster: Variational Imitation Learning with Diverse-quality Demonstrations
  Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama
- 2019 Poster: Scalable Training of Inference Networks for Gaussian-Process Models
  Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu
- 2019 Oral: Scalable Training of Inference Networks for Gaussian-Process Models
  Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu
- 2018 Poster: Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam
  Mohammad Emtiyaz Khan · Didrik Nielsen · Voot Tangkaratt · Wu Lin · Yarin Gal · Akash Srivastava
- 2018 Oral: Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam
  Mohammad Emtiyaz Khan · Didrik Nielsen · Voot Tangkaratt · Wu Lin · Yarin Gal · Akash Srivastava
- 2017 Poster: Model-Independent Online Learning for Influence Maximization
  Sharan Vaswani · Branislav Kveton · Zheng Wen · Mohammad Ghavamzadeh · Laks V.S Lakshmanan · Mark Schmidt
- 2017 Talk: Model-Independent Online Learning for Influence Maximization
  Sharan Vaswani · Branislav Kveton · Zheng Wen · Mohammad Ghavamzadeh · Laks V.S Lakshmanan · Mark Schmidt