Workshop
Implicit Generative Models
Rajesh Ranganath · Ian Goodfellow · Dustin Tran · David Blei · Balaji Lakshminarayanan · Shakir Mohamed

Wed Aug 09 03:30 PM -- 12:30 AM (PDT) @ Parkside 1
Event URL: https://sites.google.com/view/implicitmodels/

Probabilistic models are a central tool in machine learning practice. They form the basis for models that generate realistic data, uncover hidden structure, and make predictions. Traditionally, probabilistic models in machine learning have focused on prescribed models. Prescribed models specify a joint density over observed and hidden variables that can be easily evaluated. The requirement of a tractable density simplifies their learning but limits their flexibility: several real-world phenomena are better described by simulators that do not admit a tractable density. Probabilistic models defined only via the simulations they produce are called implicit models.
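To make the distinction concrete, here is a minimal sketch (in Python, with made-up names and a toy simulator, not code from the workshop) contrasting a prescribed model, whose density can be evaluated pointwise, with an implicit model defined only by its sampling procedure:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Prescribed model: a standard Gaussian. We can both sample from it
# and evaluate its density at any point.
def prescribed_sample(n):
    return rng.normal(loc=0.0, scale=1.0, size=n)

def prescribed_log_density(x):
    return norm.logpdf(x, loc=0.0, scale=1.0)

# Implicit model: push noise through a (toy) simulator. We can draw
# samples, but the density of the outputs has no tractable expression.
def implicit_sample(n):
    z = rng.normal(size=(n, 2))                       # latent noise
    return np.tanh(z[:, 0]) * np.exp(0.5 * z[:, 1])   # nonlinear simulator

x = implicit_sample(1000)   # sampling is easy
# There is no implicit_log_density(x) to call in general: learning and
# inference must work from samples alone.
```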

Arguably starting with generative adversarial networks, research on implicit models in machine learning has exploded in recent years. This workshop’s aim is to foster a discussion around the recent developments and future directions of implicit models.

Implicit models have many applications. They are used in ecology, where models simulate animal populations over time; they are used in phylogenetics, where simulations produce hypothetical ancestry trees; they are used in physics to generate particle simulations for high-energy processes. Recently, implicit models have been used to improve the state of the art in image and content generation. Part of the workshop’s focus is to discuss the commonalities among applications of implicit models.

Of particular interest at this workshop is uniting the fields that work on implicit models. For example:

+ Generative adversarial networks (a NIPS 2016 workshop) are implicit models with an adversarial training scheme.

+ Recent advances in variational inference (a NIPS 2015 and 2016 workshop) have leveraged implicit models for more accurate approximations.

+ Approximate Bayesian computation (a NIPS 2015 workshop) focuses on posterior inference for models with implicit likelihoods.

+ Learning implicit models is deeply connected to two-sample testing and density-ratio estimation (see the sketch below).

We hope to bring together these different views on implicit models, identifying their core challenges and combining their innovations.
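To illustrate the last connection above, here is a minimal sketch (an illustrative toy example with synthetic data, not taken from any workshop submission) of density-ratio estimation by classification: a probabilistic classifier is trained to distinguish data samples from model samples, and its log-odds estimate the log density ratio log p_data(x) - log p_model(x), the same quantity that GAN discriminators approximate and that two-sample tests use to decide whether two sample sets differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "real" data and samples from a hypothetical implicit model.
x_data = rng.normal(loc=0.5, scale=1.0, size=(2000, 1))
x_model = rng.normal(loc=0.0, scale=1.2, size=(2000, 1))

# Label real samples 1 and model samples 0, then fit a classifier.
X = np.vstack([x_data, x_model])
y = np.concatenate([np.ones(len(x_data)), np.zeros(len(x_model))])
clf = LogisticRegression().fit(X, y)

# With balanced classes, the classifier's log-odds approximate
# log p_data(x) - log p_model(x), i.e. the log density ratio.
x_test = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
print(clf.decision_function(x_test))  # estimated log density ratios
```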

We invite submission of 4-page papers for posters, contributed talks, and travel awards. Topics of interest include: implicit models, approximate Bayesian computation, generative adversarial networks, learning and inference for implicit models, implicit variational approximations, evaluation of implicit models, and two-sample testing. We encourage both theoretical and applied submissions.

Author Information

Rajesh Ranganath (NYU)
Ian Goodfellow (Google Brain)
Dustin Tran (Google Brain)
David Blei (Columbia University)

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.

Balaji Lakshminarayanan (Google Brain)
Shakir Mohamed (Google DeepMind)

Shakir Mohamed works on technical and sociotechnical questions in machine learning research, spanning machine learning principles, applied problems in healthcare and the environment, and ethics and diversity. Shakir is a Director for Research at DeepMind in London, an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, and an Honorary Professor at University College London. Shakir is also a founder and trustee of the Deep Learning Indaba, a grassroots charity whose work is to build pan-African capacity and leadership in AI. Amongst other roles, Shakir served as the senior programme chair for ICLR 2021 and as the General Chair for NeurIPS 2022. Shakir also serves on the Board of Directors for some of the leading conferences in the field of machine learning and AI (ICML, ICLR, NeurIPS), is a member of the Royal Society diversity and inclusion committee, and sits on the international scientific advisory committee for the pan-Canadian AI strategy. Shakir is from South Africa, completed a postdoc at the University of British Columbia, received his PhD from the University of Cambridge, and received his master's and undergraduate degrees in Electrical and Information Engineering from the University of the Witwatersrand, Johannesburg.
