Uncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximum-likelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization.
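To make the idea above concrete, below is a minimal, illustrative Python sketch of a weight-perturbed Adam update in the spirit described by the abstract. It is not the paper's reference implementation: the function names (`vadam_like`, `toy_loss_grad`), the toy quadratic loss, the hyperparameters, and the precision/prior bookkeeping are our assumptions for illustration.

```python
# A minimal sketch (not the authors' reference implementation) of the
# weight-perturbation idea: run Adam, but evaluate each gradient at weights
# perturbed by Gaussian noise whose variance is derived from the same
# vector `s` that Adam uses to adapt the learning rate.
import numpy as np

def toy_loss_grad(w):
    # Gradient of the toy quadratic loss 0.5 * ||w - 1||^2 (illustration only).
    return w - 1.0

def vadam_like(grad_fn, w, n_data, steps=1000, lr=0.01,
               beta1=0.9, beta2=0.999, prior_prec=1.0, seed=0):
    rng = np.random.default_rng(seed)
    m = np.zeros_like(w)   # first moment, as in Adam
    s = np.ones_like(w)    # second moment; also defines the posterior variance
    lam = prior_prec / n_data
    for t in range(1, steps + 1):
        # Perturb the weights before the gradient evaluation: sample from the
        # current Gaussian approximation N(w, 1 / (N * (s + lambda/N))).
        sigma = 1.0 / np.sqrt(n_data * (s + lam))
        g = grad_fn(w + sigma * rng.standard_normal(w.shape))
        # Adam-style moment updates, with the prior entering as weight decay.
        m = beta1 * m + (1 - beta1) * (g + lam * w)
        s = beta2 * s + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)
        s_hat = s / (1 - beta2**t)
        w = w - lr * m_hat / (np.sqrt(s_hat) + lam)
    # Uncertainty estimates come cheaply from s, the learning-rate vector.
    return w, 1.0 / (n_data * (s + lam))

mean, var = vadam_like(toy_loss_grad, w=np.zeros(3), n_data=100)
print("posterior mean:", mean)
print("posterior variance:", var)
```

The two differences from plain Adam mirror the abstract: the gradient is evaluated at weights perturbed by noise whose scale is derived from `s`, and that same vector `s` yields per-weight posterior variances at essentially no extra memory or computation.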
Author Information
Mohammad Emtiyaz Khan (RIKEN)
Didrik Nielsen (RIKEN)
Voot Tangkaratt (RIKEN AIP)
Wu Lin (University of British Columbia)
Yarin Gal (University of Oxford)
Akash Srivastava (MIT, IBM)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam »
  Thu. Jul 12th, 04:15 -- 07:00 PM, Room: Hall B #190
More from the Same Authors
- 2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
  Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · Estefany Kelly Buchanan · Kevin Murphy · Mark Collier · Michael Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J. Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani
- 2023 Poster: DiscoBAX - Discovery of optimal intervention sets in genomic experiment design »
  Clare Lyle · Arash Mehrjou · Pascal Notin · Andrew Jesson · Stefan Bauer · Yarin Gal · Patrick Schwab
- 2023 Poster: Differentiable Multi-Target Causal Bayesian Experimental Design »
  Panagiotis Tigas · Yashas Annadani · Desi Ivanova · Andrew Jesson · Yarin Gal · Adam Foster · Stefan Bauer
- 2022 Poster: Learning Dynamics and Generalization in Deep Reinforcement Learning »
  Clare Lyle · Mark Rowland · Will Dabney · Marta Kwiatkowska · Yarin Gal
- 2022 Poster: Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt »
  Sören Mindermann · Jan Brauner · Muhammed Razzak · Mrinank Sharma · Andreas Kirsch · Winnie Xu · Benedikt Höltgen · Aidan Gomez · Adrien Morisot · Sebastian Farquhar · Yarin Gal
- 2022 Spotlight: Learning Dynamics and Generalization in Deep Reinforcement Learning »
  Clare Lyle · Mark Rowland · Will Dabney · Marta Kwiatkowska · Yarin Gal
- 2022 Spotlight: Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt »
  Sören Mindermann · Jan Brauner · Muhammed Razzak · Mrinank Sharma · Andreas Kirsch · Winnie Xu · Benedikt Höltgen · Aidan Gomez · Adrien Morisot · Sebastian Farquhar · Yarin Gal
- 2021 : Structured second-order methods via natural-gradient descent »
  Wu Lin
- 2021 : Invited talk 2: Q&A »
  Mohammad Emtiyaz Khan
- 2021 Poster: Tractable structured natural-gradient descent using local parameterizations »
  Wu Lin · Frank Nielsen · Mohammad Emtiyaz Khan · Mark Schmidt
- 2021 Spotlight: Tractable structured natural-gradient descent using local parameterizations »
  Wu Lin · Frank Nielsen · Mohammad Emtiyaz Khan · Mark Schmidt
- 2020 Poster: Training Binary Neural Networks using the Bayesian Learning Rule »
  Xiangming Meng · Roman Bachmann · Mohammad Emtiyaz Khan
- 2020 Poster: Handling the Positive-Definite Constraint in the Bayesian Learning Rule »
  Wu Lin · Mark Schmidt · Mohammad Emtiyaz Khan
- 2020 Poster: Variational Imitation Learning with Diverse-quality Demonstrations »
  Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama
- 2019 Poster: Scalable Training of Inference Networks for Gaussian-Process Models »
  Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu
- 2019 Oral: Scalable Training of Inference Networks for Gaussian-Process Models »
  Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu
- 2019 Poster: Imitation Learning from Imperfect Demonstration »
  Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama
- 2019 Poster: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations »
  Wu Lin · Mohammad Emtiyaz Khan · Mark Schmidt
- 2019 Oral: Imitation Learning from Imperfect Demonstration »
  Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama
- 2019 Oral: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations »
  Wu Lin · Mohammad Emtiyaz Khan · Mark Schmidt