Sliced-Wasserstein Flow (SWF) is a promising approach to nonparametric generative modeling, but it has not been widely adopted due to its suboptimal generative quality and its lack of conditional modeling capabilities. In this work, we make two major contributions toward bridging this gap. First, based on the pleasant observation that (under certain conditions) the SWF of a joint distribution coincides with the SWFs of its conditional distributions, we propose Conditional Sliced-Wasserstein Flow (CSWF), a simple yet effective extension of SWF that enables nonparametric conditional modeling. Second, we introduce appropriate inductive biases for images into SWF with two techniques inspired by the local connectivity and multiscale representations studied in vision research, which greatly improve the efficiency and quality of image modeling. With all the improvements, we achieve generative performance comparable to that of many deep parametric generative models on both conditional and unconditional tasks, in a purely nonparametric fashion, demonstrating the great potential of this approach.
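The quantity underlying SWF is the sliced-Wasserstein distance, which reduces a high-dimensional optimal-transport problem to many one-dimensional ones via random projections. The sketch below is an illustrative Monte Carlo estimator of this distance, not the paper's implementation; the function name, the number of projections, and the equal-sample-size restriction are all assumptions for the example.

```python
import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=50, seed=0):
    """Monte Carlo estimate of the squared sliced 2-Wasserstein distance
    between point clouds x and y, each of shape (n, d) with equal n.

    Each random unit direction projects both clouds to 1-D, where the
    Wasserstein distance reduces to comparing sorted samples.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Random directions, normalized to the unit sphere S^{d-1}.
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project onto each direction: shape (n, n_projections).
    xp = np.sort(x @ thetas.T, axis=0)
    yp = np.sort(y @ thetas.T, axis=0)
    # Average the squared 1-D Wasserstein-2 distances over projections.
    return np.mean((xp - yp) ** 2)
```

An SWF then evolves a particle set by repeatedly taking gradient steps that decrease this distance to the target samples, which is what makes the method nonparametric: no generator network is ever fit.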
Author Information
Chao Du (Sea AI Lab)
Tianbo Li (SAIL)
Tianyu Pang (Sea AI Lab)
https://scholar.google.com/citations?user=wYDbtFsAAAAJ&hl=en
Shuicheng Yan
Min Lin (Sea AI Lab)
More from the Same Authors
- 2021: Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
  Xiao Yang · Yinpeng Dong · Tianyu Pang
- 2022: $O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks
  Tianyu Pang · Shuicheng Yan · Min Lin
- 2023 Poster: Improving Adversarial Robustness of Deep Equilibrium Models with Explicit Regulations Along the Neural Dynamics
  Zonghan Yang · Peng Li · Tianyu Pang · Yang Liu
- 2023 Poster: Better Diffusion Models Further Improve Adversarial Training
  Zekai Wang · Tianyu Pang · Chao Du · Min Lin · Weiwei Liu · Shuicheng Yan
- 2023 Poster: Bag of Tricks for Training Data Extraction from Language Models
  Weichen Yu · Tianyu Pang · Qian Liu · Chao Du · Bingyi Kang · Yan Huang · Min Lin · Shuicheng Yan
- 2022 Poster: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2022 Spotlight: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2019 Poster: On the Spectral Bias of Neural Networks
  Nasim Rahaman · Aristide Baratin · Devansh Arpit · Felix Draxler · Min Lin · Fred Hamprecht · Yoshua Bengio · Aaron Courville
- 2019 Oral: On the Spectral Bias of Neural Networks
  Nasim Rahaman · Aristide Baratin · Devansh Arpit · Felix Draxler · Min Lin · Fred Hamprecht · Yoshua Bengio · Aaron Courville
- 2019 Poster: Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2019 Oral: Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2018 Poster: Max-Mahalanobis Linear Discriminant Analysis Networks
  Tianyu Pang · Chao Du · Jun Zhu
- 2018 Oral: Max-Mahalanobis Linear Discriminant Analysis Networks
  Tianyu Pang · Chao Du · Jun Zhu