Building upon recent theory that establishes the connection between implicit generative modeling (IGM) and optimal transport, in this study we propose a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them. The proposed algorithm is based on a functional optimization problem, which aims at finding a measure that is as close as possible to the data distribution while remaining expressive enough for generative modeling purposes. We formulate the problem as a gradient flow in the space of probability measures. The connection between gradient flows and stochastic differential equations lets us develop a computationally efficient algorithm for solving the optimization problem. We provide a formal theoretical analysis in which we prove finite-time error guarantees for the proposed algorithm. To the best of our knowledge, the proposed algorithm is the first nonparametric IGM algorithm with explicit theoretical guarantees. Our experimental results support our theory and show that our algorithm is able to successfully capture the structure of different types of data distributions.
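To make the idea concrete, here is a minimal particle-based sketch of one step of a sliced-Wasserstein flow, assuming the common formulation: random one-dimensional projections, a quantile-matching (1D optimal transport) drift, and a small Gaussian noise term playing the role of the diffusion. The function name `swf_step` and all parameter choices are illustrative, not the paper's exact scheme.

```python
import numpy as np

def swf_step(particles, data, n_proj=50, step=0.1, noise=0.01, rng=None):
    """One illustrative update of a sliced-Wasserstein flow (sketch).

    For each random direction theta on the unit sphere, the 1D optimal
    transport map between projected particles and projected data is a
    quantile (sort) matching; the drift averages these 1D displacements
    over directions, and a Gaussian term approximates the diffusion.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    drift = np.zeros_like(particles)
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # uniform direction on the sphere
        proj_p = particles @ theta              # 1D projection of particles
        proj_x = data @ theta                   # 1D projection of data
        # rank of each particle among the particles, as a quantile level
        order = np.argsort(proj_p)
        ranks = np.empty(n)
        ranks[order] = (np.arange(n) + 0.5) / n
        # 1D OT target: the data quantile at each particle's rank
        target = np.quantile(proj_x, ranks)
        drift += np.outer(target - proj_p, theta)
    particles = particles + (step / n_proj) * drift
    particles = particles + np.sqrt(noise * step) * rng.standard_normal((n, d))
    return particles
```

Iterating `swf_step` transports an arbitrary particle cloud toward the empirical data distribution; no parametric model is fit, which is the sense in which the method is nonparametric and parameter-free.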
Author Information
Antoine Liutkus (Inria)
Umut Simsekli (Telecom ParisTech)
Szymon Majewski (IMPAN)
Alain Durmus (ENS)
Fabian-Robert Stöter (Inria)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions
  Thu. Jun 13th, 04:00 -- 04:20 PM, Room 101
More from the Same Authors
- 2021 Poster: Relative Positional Encoding for Transformers with Linear Complexity
  Antoine Liutkus · Ondřej Cífka · Shih-Lun Wu · Umut Simsekli · Yi-Hsuan Yang · Gaël Richard
- 2021 Oral: Relative Positional Encoding for Transformers with Linear Complexity
  Antoine Liutkus · Ondřej Cífka · Shih-Lun Wu · Umut Simsekli · Yi-Hsuan Yang · Gaël Richard
- 2020 Poster: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise
  Umut Simsekli · Lingjiong Zhu · Yee-Whye Teh · Mert Gurbuzbalaban
- 2019 Poster: Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization
  Thanh Huy Nguyen · Umut Simsekli · Gaël Richard
- 2019 Poster: A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks
  Umut Simsekli · Levent Sagun · Mert Gurbuzbalaban
- 2019 Oral: A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks
  Umut Simsekli · Levent Sagun · Mert Gurbuzbalaban
- 2019 Oral: Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization
  Thanh Huy Nguyen · Umut Simsekli · Gaël Richard
- 2018 Poster: Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization
  Umut Simsekli · Cagatay Yildiz · Thanh Huy Nguyen · Ali Taylan Cemgil · Gaël Richard
- 2018 Oral: Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization
  Umut Simsekli · Cagatay Yildiz · Thanh Huy Nguyen · Ali Taylan Cemgil · Gaël Richard
- 2017 Poster: Fractional Langevin Monte Carlo: Exploring Levy Driven Stochastic Differential Equations for MCMC
  Umut Simsekli
- 2017 Talk: Fractional Langevin Monte Carlo: Exploring Levy Driven Stochastic Differential Equations for MCMC
  Umut Simsekli