Probabilistic models are a central tool in machine learning practice. They form the basis for models that generate realistic data, uncover hidden structure, and make predictions. Traditionally, probabilistic models in machine learning have focused on prescribed models, which specify a joint density over observed and hidden variables that can be easily evaluated. The requirement of a tractable density simplifies learning but limits flexibility: many real-world phenomena are better described by simulators that do not admit a tractable density. Probabilistic models defined only through the simulations they produce are called implicit models.
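To make the distinction concrete, here is a minimal sketch in Python with NumPy; the distributions and the simulator are illustrative choices, not taken from any particular paper. A prescribed model exposes both a sampler and a tractable log-density, while an implicit model exposes only a simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prescribed model: a unit-variance Gaussian. We can draw samples AND
# evaluate the log-density of any point in closed form.
def gaussian_sample(mu, n):
    return rng.normal(mu, 1.0, size=n)

def gaussian_log_density(x, mu):
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

# Implicit model: data are defined only as the output of a simulator that
# pushes random noise through a black-box transformation. Sampling is easy,
# but there is no tractable expression for the density of the output.
def simulator_sample(theta, n):
    z = rng.normal(size=n)                  # latent noise
    return np.tanh(theta * z) + 0.1 * z**3  # nonlinear push-forward

x = gaussian_sample(mu=0.0, n=3)
print(gaussian_log_density(x, mu=0.0))   # prescribed: density available
print(simulator_sample(theta=2.0, n=3))  # implicit: samples only
```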
Arguably starting with generative adversarial networks, research on implicit models in machine learning has exploded in recent years. This workshop aims to foster discussion of recent developments and future directions for implicit models.
Implicit models have many applications. They are used in ecology, where models simulate animal populations over time; in phylogenetics, where simulations produce hypothetical ancestry trees; and in physics, to generate particle simulations for high-energy processes. Recently, implicit models have been used to improve the state of the art in image and content generation. Part of the workshop’s focus is to discuss the commonalities among applications of implicit models.
A particular aim of this workshop is to unite the fields that work on implicit models. For example:
+ Generative adversarial networks (a NIPS 2016 workshop) are implicit models with an adversarial training scheme.
+ Recent advances in variational inference (a NIPS 2015 and 2016 workshop) have leveraged implicit models for more accurate approximations.
+ Approximate Bayesian computation (a NIPS 2015 workshop) focuses on posterior inference for models with implicit likelihoods (a minimal rejection-sampling sketch appears below this list).
+ Learning implicit models is deeply connected to two-sample testing and density ratio estimation; the classifier sketch directly after this list makes the connection concrete.
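Here is a minimal sketch of the density-ratio trick, assuming scikit-learn is available; the two Gaussians and the sample sizes are illustrative. A probabilistic classifier trained to distinguish samples of p from samples of q estimates the ratio p(x)/q(x) through its predicted odds, which is the quantity a GAN discriminator tracks and a classifier two-sample test thresholds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Samples from two distributions p and q. Only samples are required,
# never densities -- exactly the implicit-model setting.
x_p = rng.normal(loc=1.0, scale=1.0, size=(2000, 1))
x_q = rng.normal(loc=0.0, scale=1.0, size=(2000, 1))

# Label p-samples 1 and q-samples 0, then fit a probabilistic classifier.
X = np.vstack([x_p, x_q])
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
clf = LogisticRegression().fit(X, y)

# With equal sample sizes, the classifier's odds estimate the density ratio:
# p(x)/q(x) ~ d(x) / (1 - d(x)), where d(x) = P(label = 1 | x).
x_test = np.array([[0.5]])
d = clf.predict_proba(x_test)[0, 1]
print(d / (1 - d))  # density-ratio estimate at x = 0.5
```

For these two unit-variance Gaussians the true ratio is exp(x - 0.5), so the printed estimate at x = 0.5 should be close to 1.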
We hope to bring together these different views on implicit models, identifying their core challenges and combining their innovations.
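As a concrete instance of posterior inference with an implicit likelihood, here is the rejection-ABC sketch promised in the list above. The Gaussian simulator, the sample mean as summary statistic, and the tolerance epsilon are all assumed, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data, assumed to come from a simulator with unknown theta.
x_obs = rng.normal(loc=1.5, scale=1.0, size=100)
s_obs = x_obs.mean()  # summary statistic

def simulator(theta, n=100):
    # Implicit likelihood: we can simulate data given theta,
    # but we never evaluate p(x | theta).
    return rng.normal(loc=theta, scale=1.0, size=n)

# Rejection ABC: draw theta from the prior, simulate data, and keep theta
# whenever the simulated summary lands within epsilon of the observed one.
epsilon = 0.05
accepted = []
for _ in range(20_000):
    theta = rng.normal(0.0, 2.0)  # prior draw
    if abs(simulator(theta).mean() - s_obs) < epsilon:
        accepted.append(theta)

print(len(accepted), np.mean(accepted))  # approximate posterior draws
```

The accepted draws approximate the posterior p(theta | x_obs); shrinking epsilon tightens the approximation at the cost of more rejected simulations.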
We invite submissions of 4-page papers for posters, contributed talks, and travel awards. Topics of interest include implicit models, approximate Bayesian computation, generative adversarial networks, learning and inference for implicit models, implicit variational approximations, evaluation of implicit models, and two-sample testing. We encourage both theoretical and applied submissions.
Author Information
Rajesh Ranganath (NYU)
Ian Goodfellow (Google Brain)
Dustin Tran (Google Brain)
David Blei (Columbia University)
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
Balaji Lakshminarayanan (Google Brain)
Shakir Mohamed (Google DeepMind)

Shakir Mohamed works on technical and sociotechnical questions in machine learning research, spanning machine learning principles, applied problems in healthcare and the environment, and ethics and diversity. Shakir is a Director for Research at DeepMind in London, an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, and an Honorary Professor at University College London. Shakir is also a founder and trustee of the Deep Learning Indaba, a grassroots charity whose work is to build pan-African capacity and leadership in AI. Amongst other roles, Shakir served as senior programme chair for ICLR 2021 and as general chair for NeurIPS 2022. Shakir also serves on the boards of directors of some of the leading conferences in the field of machine learning and AI (ICML, ICLR, NeurIPS), is a member of the Royal Society diversity and inclusion committee, and sits on the international scientific advisory committee for the pan-Canadian AI strategy. Shakir is from South Africa, completed a postdoc at the University of British Columbia, received his PhD from the University of Cambridge, and received his master's and undergraduate degrees in Electrical and Information Engineering from the University of the Witwatersrand, Johannesburg.
More from the Same Authors
-
2022 : Optimization-based Causal Estimation from Heterogenous Environments »
Mingzhang Yin · Yixin Wang · David Blei -
2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · E. Kelly Buchanan · Kevin Murphy · Mark Collier · Mike Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani -
2023 : Causal-structure Driven Augmentations for Text OOD Generalization »
Amir Feder · Yoav Wald · Claudia Shi · Suchi Saria · David Blei -
2023 : Practical and Asymptotically Exact Conditional Sampling in Diffusion Models »
Brian Trippe · Luhuan Wu · Christian Naesseth · David Blei · John Cunningham -
2023 Poster: Scaling Vision Transformers to 22 Billion Parameters »
Mostafa Dehghani · Josip Djolonga · Basil Mustafa · Piotr Padlewski · Jonathan Heek · Justin Gilmer · Andreas Steiner · Mathilde Caron · Robert Geirhos · Ibrahim Alabdulmohsin · Rodolphe Jenatton · Lucas Beyer · Michael Tschannen · Anurag Arnab · Xiao Wang · Carlos Riquelme · Matthias Minderer · Joan Puigcerver · Utku Evci · Manoj Kumar · Sjoerd van Steenkiste · Gamaleldin Elsayed · Aravindh Mahendran · Fisher Yu · Avital Oliver · Fantine Huot · Jasmijn Bastings · Mark Collier · Alexey Gritsenko · Vighnesh N Birodkar · Cristina Vasconcelos · Yi Tay · Thomas Mensink · Alexander Kolesnikov · Filip Pavetic · Dustin Tran · Thomas Kipf · Mario Lucic · Xiaohua Zhai · Daniel Keysers · Jeremiah Harmsen · Neil Houlsby -
2023 Poster: A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models »
James Allingham · Jie Ren · Michael Dusenberry · Xiuye Gu · Yin Cui · Dustin Tran · Jeremiah Liu · Balaji Lakshminarayanan -
2023 Oral: Scaling Vision Transformers to 22 Billion Parameters »
Mostafa Dehghani · Josip Djolonga · Basil Mustafa · Piotr Padlewski · Jonathan Heek · Justin Gilmer · Andreas Steiner · Mathilde Caron · Robert Geirhos · Ibrahim Alabdulmohsin · Rodolphe Jenatton · Lucas Beyer · Michael Tschannen · Anurag Arnab · Xiao Wang · Carlos Riquelme · Matthias Minderer · Joan Puigcerver · Utku Evci · Manoj Kumar · Sjoerd van Steenkiste · Gamaleldin Elsayed · Aravindh Mahendran · Fisher Yu · Avital Oliver · Fantine Huot · Jasmijn Bastings · Mark Collier · Alexey Gritsenko · Vighnesh N Birodkar · Cristina Vasconcelos · Yi Tay · Thomas Mensink · Alexander Kolesnikov · Filip Pavetic · Dustin Tran · Thomas Kipf · Mario Lucic · Xiaohua Zhai · Daniel Keysers · Jeremiah Harmsen · Neil Houlsby -
2023 Invited Talk: Machine Learning with Social Purpose »
Shakir Mohamed -
2022 : Reconstructing the Universe with Variational self-Boosted Sampling »
Chirag Modi · Yin Li · David Blei -
2022 : Uncertainty Quantification in Deep Learning »
Dustin Tran -
2022 Poster: Variational Inference for Infinitely Deep Neural Networks »
Achille Nazaret · David Blei -
2022 Spotlight: Variational Inference for Infinitely Deep Neural Networks »
Achille Nazaret · David Blei -
2021 : Harnessing Machine Learning to Achieve Net Zero »
Shakir Mohamed -
2021 : Uncertainty Modeling from 50M to 1B »
Dustin Tran -
2021 Poster: Unsupervised Representation Learning via Neural Activation Coding »
Yookoon Park · Sangho Lee · Gunhee Kim · David Blei -
2021 Poster: A Proxy Variable View of Shared Confounding »
Yixin Wang · David Blei -
2021 Spotlight: A Proxy Variable View of Shared Confounding »
Yixin Wang · David Blei -
2021 Oral: Unsupervised Representation Learning via Neural Activation Coding »
Yookoon Park · Sangho Lee · Gunhee Kim · David Blei -
2020 : Open discussion in breakout rooms »
Shakir Mohamed · Faris Gezahegn · ST John -
2020 Poster: Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors »
Mike Dusenberry · Ghassen Jerfel · Yeming Wen · Yian Ma · Jasper Snoek · Katherine Heller · Balaji Lakshminarayanan · Dustin Tran -
2020 : Angelic Troublemaking for Queer Communities with Bisi Alimi »
Shakir Mohamed · Bisi Alimi -
2020 : Queering Machine Learning »
Shakir Mohamed -
2018 Poster: Noisin: Unbiased Regularization for Recurrent Neural Networks »
Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei -
2018 Oral: Noisin: Unbiased Regularization for Recurrent Neural Networks »
Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei -
2018 Poster: Image Transformer »
Niki Parmar · Ashish Vaswani · Jakob Uszkoreit · Lukasz Kaiser · Noam Shazeer · Alexander Ku · Dustin Tran -
2018 Oral: Image Transformer »
Niki Parmar · Ashish Vaswani · Jakob Uszkoreit · Lukasz Kaiser · Noam Shazeer · Alexander Ku · Dustin Tran -
2018 Poster: Augment and Reduce: Stochastic Inference for Large Categorical Distributions »
Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei -
2018 Poster: Black Box FDR »
Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan -
2018 Poster: Learning Implicit Generative Models with the Method of Learned Moments »
Suman Ravuri · Shakir Mohamed · Mihaela Rosca · Oriol Vinyals -
2018 Oral: Augment and Reduce: Stochastic Inference for Large Categorical Distributions »
Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei -
2018 Oral: Learning Implicit Generative Models with the Method of Learned Moments »
Suman Ravuri · Shakir Mohamed · Mihaela Rosca · Oriol Vinyals -
2018 Oral: Black Box FDR »
Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan -
2017 Poster: Robust Probabilistic Modeling with Bayesian Data Reweighting »
Yixin Wang · Alp Kucukelbir · David Blei -
2017 Poster: Evaluating Bayesian Models with Posterior Dispersion Indices »
Alp Kucukelbir · Yixin Wang · David Blei -
2017 Poster: Zero-Inflated Exponential Family Embeddings »
Liping Liu · David Blei -
2017 Talk: Zero-Inflated Exponential Family Embeddings »
Liping Liu · David Blei -
2017 Talk: Evaluating Bayesian Models with Posterior Dispersion Indices »
Alp Kucukelbir · Yixin Wang · David Blei -
2017 Talk: Robust Probabilistic Modeling with Bayesian Data Reweighting »
Yixin Wang · Alp Kucukelbir · David Blei