Poster
Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching
Ziliang Chen · Zhanfu Yang · Xiaoxi Wang · Xiaodan Liang · Xiaopeng Yan · Guanbin Li · Liang Lin

Tue Jun 11th 06:30 -- 09:00 PM @ Pacific Ballroom #12

A broad range of cross-$m$-domain generation research boils down to matching a joint distribution with deep generative models (DGMs). Existing algorithms excel with pairwise domains but, as $m$ increases, struggle to scale to fitting the joint distribution over all $m$ domains. In this paper, we propose a domain-scalable DGM, MMI-ALI, for $m$-domain joint distribution matching. As an $m$-domain ensemble model of ALIs (Dumoulin et al., 2016), MMI-ALI is adversarially trained by maximizing the Multivariate Mutual Information (MMI) w.r.t. the joint variables of each pair of domains and their shared feature. The negative MMIs are upper bounded by a series of feasible losses that provably lead to matching the $m$-domain joint distributions. MMI-ALI scales linearly as $m$ increases and thus strikes the right balance between efficacy and scalability. We evaluate MMI-ALI in diverse, challenging $m$-domain scenarios and verify its superiority.
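The sketch below illustrates the ensemble structure the abstract describes: one ALI-style encoder/decoder branch per domain sharing a single latent feature, so parameters grow linearly in $m$ and cross-domain generation routes through the shared feature. All module names, shapes, and the simple MLP bodies are illustrative assumptions, not the authors' released architecture, and the adversarial/MMI losses are omitted.

```python
# Minimal PyTorch sketch (assumed structure) of an m-domain ALI-style ensemble
# with a shared latent feature; not the authors' official implementation.
import torch
import torch.nn as nn


class DomainALI(nn.Module):
    """One ALI branch for a single domain: encoder x -> z and decoder z -> x."""
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))


class MMIALIEnsemble(nn.Module):
    """Ensemble of m domain branches sharing one latent space.

    Only the m branches are instantiated here, so the parameter count scales
    as O(m); the per-pair adversarial discriminators and the MMI-based losses
    described in the abstract are left out of this sketch.
    """
    def __init__(self, x_dims, z_dim: int = 64):
        super().__init__()
        self.branches = nn.ModuleList(DomainALI(d, z_dim) for d in x_dims)

    def translate(self, x: torch.Tensor, src: int, tgt: int) -> torch.Tensor:
        # Cross-domain generation: encode with the source branch, then decode
        # with the target branch through the shared feature z.
        z = self.branches[src].encoder(x)
        return self.branches[tgt].decoder(z)


if __name__ == "__main__":
    model = MMIALIEnsemble(x_dims=[784, 784, 784])  # m = 3 toy domains
    x0 = torch.randn(8, 784)
    x2 = model.translate(x0, src=0, tgt=2)
    print(x2.shape)  # torch.Size([8, 784])
```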

Author Information

Ziliang Chen (Sun Yat-sen University)
Zhanfu Yang (Purdue University)

I am a first-year graduate student at Purdue University, with one paper accepted at ICML 2019. My interests are machine learning, artificial intelligence security, distributed and parallel computing, and quantum computing.

Xiaoxi Wang (Sun Yat-sen University)
Xiaodan Liang (Sun Yat-sen University)
Xiaopeng Yan (Sun Yat-sen University)
Guanbin Li (Sun Yat-sen University)
Liang Lin (Sun Yat-sen University)
