A broad range of cross-$m$-domain generation research boils down to matching a joint distribution with deep generative models (DGMs). Existing algorithms excel in pairwise domains, but as $m$ increases they struggle to scale to fit a joint distribution. In this paper, we propose a domain-scalable DGM, MMI-ALI, for $m$-domain joint distribution matching. As an $m$-domain ensemble of ALIs (Dumoulin et al., 2016), MMI-ALI is adversarially trained by maximizing the Multivariate Mutual Information (MMI) w.r.t. the joint variables of each pair of domains and their shared feature. The negative MMIs are upper bounded by a series of feasible losses that provably lead to matching the $m$-domain joint distributions. MMI-ALI scales linearly as $m$ increases and thus strikes the right balance between efficacy and scalability. We evaluate MMI-ALI in diverse, challenging $m$-domain scenarios and verify its superiority.
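The abstract describes an objective built from per-domain adversarial (ALI) terms plus MMI-surrogate terms that couple domain pairs through a shared feature, with the total number of loss terms growing linearly in $m$. The sketch below illustrates only that counting structure; it is not the paper's implementation, and the function names `ali_loss` and `mmi_upper_bound` are hypothetical placeholders.

```python
# Illustrative sketch (NOT the paper's actual method) of how an m-domain
# ensemble objective can stay linear in m: one ALI-style adversarial term
# per domain, plus one MMI-surrogate coupling term per domain pair routed
# through a shared feature. Placeholder losses return constants so the
# term-counting is easy to check.

def ali_loss(i):
    # Placeholder for the adversarial (ALI) loss of domain i's
    # encoder/decoder pair; a real implementation would score
    # joint samples (x_i, z) with a discriminator.
    return 1.0

def mmi_upper_bound(i, j):
    # Placeholder for a feasible upper bound on the negative MMI of
    # the joint variables of domains i, j and their shared feature.
    return 0.5

def ensemble_objective(m, lam=1.0, all_pairs=False):
    """Total objective for an m-domain ensemble.

    all_pairs=False couples every domain to a single shared anchor
    (an assumption made here for illustration), giving m - 1 coupling
    terms and O(m) total terms; all_pairs=True uses all m*(m-1)/2
    pairs for comparison.
    """
    adversarial = sum(ali_loss(i) for i in range(m))
    if all_pairs:
        pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    else:
        pairs = [(0, j) for j in range(1, m)]  # hypothetical anchor: domain 0
    coupling = sum(mmi_upper_bound(i, j) for i, j in pairs)
    return adversarial + lam * coupling, len(pairs)
```

With $m = 4$, the anchored variant uses 3 coupling terms versus 6 for all pairs, which is the gap that widens quadratically as $m$ grows.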
Ziliang Chen (Sun Yat-sen University)
Zhanfu Yang (Purdue University)
Xiaoxi Wang (Sun Yat-sen University)
Xiaodan Liang (Sun Yat-sen University)
Xiaopeng Yan (Sun Yat-sen University)
Guanbin Li (Sun Yat-sen University)
Liang Lin (Sun Yat-sen University)
Related Events (a corresponding poster, oral, or spotlight)
2019 Oral: Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching »
Tue Jun 11th 02:20 -- 02:25 PM Room Hall A