Oral
Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching
Ziliang Chen · ZHANFU YANG · Xiaoxi Wang · Xiaodan Liang · xiaopeng yan · Guanbin Li · Liang Lin

Tue Jun 11th 02:20 -- 02:25 PM @ Hall A

A broad range of cross-multi-domain generation research boils down to matching a joint distribution with deep generative models (DGMs). Existing methods excel at pairwise domains, yet struggle to scale to fit a joint distribution as the number of domains increases. In this paper, we propose a domain-scalable DGM, \emph{i.e.}, MMI-ALI, for multi-domain joint distribution matching. As a multi-domain ensemble of ALIs \cite{dumoulin2016adversarially}, MMI-ALI is adversarially trained by maximizing the \emph{Multivariate Mutual Information} (MMI) \emph{w.r.t.} the joint variables of each pair of domains and their shared feature. The negative MMIs are upper bounded by a series of feasible losses that provably lead to matching multi-domain joint distributions. MMI-ALI scales linearly as the number of domains increases, may share parameters across domains, and thus strikes the right balance between efficacy and scalability. We evaluate MMI-ALI in diverse challenging multi-domain scenarios and verify the superiority of our DGM.
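For reference, one common sign convention writes the three-variable multivariate (interaction) mutual information of a domain pair and a shared code as the difference between a mutual information and its conditional counterpart. The expansion below is only a sketch of that standard identity; $x_i$, $x_j$, and $z$ are illustrative placeholders for the pairwise joint variables and shared feature mentioned in the abstract, not the paper's exact notation or objective:

\[
\begin{aligned}
I(x_i; x_j; z) &= I(x_i; x_j) - I(x_i; x_j \mid z) \\
&= H(x_i) + H(x_j) + H(z) - H(x_i, x_j) - H(x_i, z) - H(x_j, z) + H(x_i, x_j, z).
\end{aligned}
\]

In this reading, maximizing such pairwise MMI terms (equivalently, minimizing upper bounds on their negatives, as the abstract states) ties each pair of domains to the shared feature; the paper's concrete losses are the feasible upper bounds referred to above.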

Author Information

Ziliang Chen (Sun Yat-sen University)
ZHANFU YANG (Purdue University)

I am a first-year graduate student at Purdue University with one paper accepted at ICML 2019. My research interests are machine learning, artificial intelligence security, distributed and parallel computing, and quantum computing.

Xiaoxi Wang (Sun Yat-sen University)
Xiaodan Liang (Sun Yat-sen University)
xiaopeng yan (SYSU)
Guanbin Li (Sun Yat-sen University)
Liang Lin (Sun Yat-sen University)
