This paper provides a simple procedure to fit generative networks to target distributions, with the goal of a small Wasserstein distance (or other optimal transport cost). The approach is based on two principles: (a) if the source randomness of the network is a continuous distribution (the "semi-discrete" setting), then the Wasserstein distance is realized by a deterministic optimal transport mapping; (b) given an optimal transport mapping between a generator network and a target distribution, the Wasserstein distance may be reduced via a regression between the generated data and the mapped target points. The procedure therefore alternates these two steps, forming an optimal transport mapping and regressing against it, gradually adjusting the generator network towards the target distribution. Mathematically, this approach is shown to minimize the Wasserstein distance to both the empirical target distribution and its underlying population counterpart. Empirically, good performance is demonstrated on the training and test sets of the MNIST and Thin-8 datasets. The paper closes with a discussion of the unsuitability of the Wasserstein distance for certain tasks, as identified in prior work (Arora et al., 2017; Huang et al., 2017).
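The alternating loop described in the abstract can be made concrete. Below is a minimal sketch in PyTorch, assuming equal-size generated and target samples so that step (a) reduces to an exact matching computed with scipy's linear_sum_assignment; the paper's semi-discrete setting would instead solve for a dual potential over the finite target set. The function name and hyperparameters here are illustrative, not the authors' implementation.

```python
# A sketch of the alternating OT / regression training loop, assuming a
# PyTorch generator. The exact assignment via linear_sum_assignment stands
# in for the paper's semi-discrete OT solver.
import torch
from scipy.optimize import linear_sum_assignment

def train_ot_regression(generator, target, noise_dim, rounds=10,
                        regress_steps=200, lr=1e-3):
    """Alternate (a) an optimal transport matching from generated points
    to target points with (b) regression of the generator onto the
    matched targets, as in the abstract's two-step procedure."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    n = target.shape[0]
    for _ in range(rounds):
        # Fix a batch of source noise for this round.
        z = torch.randn(n, noise_dim)
        with torch.no_grad():
            x = generator(z)
        # Step (a): optimal transport between generated and target points
        # under squared Euclidean cost (exact matching on equal-size sets).
        cost = torch.cdist(x, target).pow(2).numpy()
        row, col = linear_sum_assignment(cost)
        matched = target[torch.as_tensor(col)]  # target paired with x_i
        # Step (b): regress the generator toward its matched targets,
        # which can only decrease the cost of the current coupling.
        for _ in range(regress_steps):
            opt.zero_grad()
            loss = (generator(z) - matched).pow(2).sum(dim=1).mean()
            loss.backward()
            opt.step()
    return generator
```

Keeping z fixed during the regression is what makes step (b) a supervised fit; once the generator has moved, the matching is recomputed, so the coupling and the network improve in alternation.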
Author Information
Yucheng Chen (University of Illinois at Urbana-Champaign)
Matus Telgarsky (UIUC)
Chao Zhang (University of Illinois, Urbana Champaign)
Bolton Bailey (University of Illinois)
Daniel Hsu (Columbia University)
Jian Peng (UIUC)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization
  Thu. Jun 13th 01:30 -- 04:00 AM Room Pacific Ballroom #4
More from the Same Authors
- 2021: Early-stopped neural networks are consistent
  Ziwei Ji · Matus Telgarsky
- 2021: Coordinate-wise Control Variates for Deep Policy Gradients
  Yuanyi Zhong · Yuan Zhou · Jian Peng
- 2022: Simple and near-optimal algorithms for hidden stratification and multi-group learning
  Christopher Tosh · Daniel Hsu
- 2022: Is Self-Supervised Contrastive Learning More Robust Than Supervised Learning?
  Yuanyi Zhong · Haoran Tang · Junkun Chen · Jian Peng · Yu-Xiong Wang
- 2023 Poster: DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design
  Jiaqi Guan · Xiangxin Zhou · Yuwei Yang · Yu Bao · Jian Peng · Jianzhu Ma · Qiang Liu · Liang Wang · Quanquan Gu
- 2022 Poster: Off-Policy Reinforcement Learning with Delayed Rewards
  Beining Han · Zhizhou Ren · Zuofan Wu · Yuan Zhou · Jian Peng
- 2022 Spotlight: Off-Policy Reinforcement Learning with Delayed Rewards
  Beining Han · Zhizhou Ren · Zuofan Wu · Yuan Zhou · Jian Peng
- 2022 Poster: Proximal Exploration for Model-guided Protein Sequence Design
  Zhizhou Ren · Jiahan Li · Fan Ding · Yuan Zhou · Jianzhu Ma · Jian Peng
- 2022 Poster: Pocket2Mol: Efficient Molecular Sampling Based on 3D Protein Pockets
  Xingang Peng · Shitong Luo · Jiaqi Guan · Qi Xie · Jian Peng · Jianzhu Ma
- 2022 Spotlight: Pocket2Mol: Efficient Molecular Sampling Based on 3D Protein Pockets
  Xingang Peng · Shitong Luo · Jiaqi Guan · Qi Xie · Jian Peng · Jianzhu Ma
- 2022 Spotlight: Proximal Exploration for Model-guided Protein Sequence Design
  Zhizhou Ren · Jiahan Li · Fan Ding · Yuan Zhou · Jianzhu Ma · Jian Peng
- 2022 Poster: Simple and near-optimal algorithms for hidden stratification and multi-group learning
  Christopher Tosh · Daniel Hsu
- 2022 Spotlight: Simple and near-optimal algorithms for hidden stratification and multi-group learning
  Christopher Tosh · Daniel Hsu
- 2021 Poster: Fast margin maximization via dual acceleration
  Ziwei Ji · Nati Srebro · Matus Telgarsky
- 2021 Spotlight: Fast margin maximization via dual acceleration
  Ziwei Ji · Nati Srebro · Matus Telgarsky
- 2020 Poster: A Chance-Constrained Generative Framework for Sequence Optimization
  Xianggen Liu · Qiang Liu · Sen Song · Jian Peng
- 2019 Poster: Quantile Stein Variational Gradient Descent for Batch Bayesian Optimization
  Chengyue Gong · Jian Peng · Qiang Liu
- 2019 Oral: Quantile Stein Variational Gradient Descent for Batch Bayesian Optimization
  Chengyue Gong · Jian Peng · Qiang Liu
- 2019 Poster: Teaching a black-box learner
  Sanjoy Dasgupta · Daniel Hsu · Stefanos Poulis · Jerry Zhu
- 2019 Oral: Teaching a black-box learner
  Sanjoy Dasgupta · Daniel Hsu · Stefanos Poulis · Jerry Zhu
- 2018 Poster: Learning to Explore via Meta-Policy Gradient
  Tianbing Xu · Qiang Liu · Liang Zhao · Jian Peng
- 2018 Oral: Learning to Explore via Meta-Policy Gradient
  Tianbing Xu · Qiang Liu · Liang Zhao · Jian Peng
- 2017 Poster: Neural networks and rational functions
  Matus Telgarsky
- 2017 Talk: Neural networks and rational functions
  Matus Telgarsky