

Oral

Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching

Ziliang Chen · ZHANFU YANG · Xiaoxi Wang · Xiaodan Liang · xiaopeng yan · Guanbin Li · Liang Lin

Abstract:

A broad range of cross-multi-domain generation research boils down to matching a joint distribution with deep generative models (DGMs). Existing methods excel at pairwise domains, but as the number of domains increases, they struggle to scale to fit the joint distribution. In this paper, we propose a domain-scalable DGM, \emph{i.e.}, MMI-ALI, for multi-domain joint distribution matching. As a multi-domain ensemble of ALIs \cite{dumoulin2016adversarially}, MMI-ALI is adversarially trained by maximizing the \emph{Multivariate Mutual Information} (MMI) \emph{w.r.t.} the joint variables of each pair of domains and their shared feature. The negative MMIs are upper bounded by a series of feasible losses that provably lead to matching the multi-domain joint distributions. MMI-ALI scales linearly as the number of domains increases and may share parameters across domains, thus striking the right balance between efficacy and scalability. We evaluate MMI-ALI in diverse challenging multi-domain scenarios and verify the superiority of our DGM.
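As a rough sketch of the ensemble structure the abstract describes (one ALI-style encoder/decoder/discriminator per domain, all sharing a single latent code, so the model grows linearly with the number of domains), the snippet below shows only the standard ALI adversarial objective per domain; the architecture, dimensions, and names are illustrative assumptions, and the paper's MMI losses are not reproduced here.

```python
# Hypothetical sketch: an ALI per domain with a shared latent code z.
# Architectures and dimensions are placeholders, not the authors' implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):             # q(z | x_k) for domain k
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):             # p(x_k | z) for domain k
    def __init__(self, z_dim, x_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
    def forward(self, z):
        return self.net(z)

class JointDiscriminator(nn.Module):  # scores joint pairs (x_k, z), as in ALI
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

# One (encoder, decoder, discriminator) triple per domain; z is shared across
# domains, so the ensemble scales linearly in the number of domains.
x_dims, z_dim = [16, 16, 16], 8
encoders = [Encoder(d, z_dim) for d in x_dims]
decoders = [Decoder(z_dim, d) for d in x_dims]
discs    = [JointDiscriminator(d, z_dim) for d in x_dims]

bce = nn.BCEWithLogitsLoss()
batch = [torch.randn(32, d) for d in x_dims]      # stand-in multi-domain batch

# Per-domain ALI-style discriminator loss: distinguish encoder joints (x, q(z|x))
# from decoder joints (p(x|z), z). The paper's MMI terms would be added on top.
for k, x_k in enumerate(batch):
    z_enc = encoders[k](x_k)
    z_prior = torch.randn(x_k.size(0), z_dim)
    x_gen = decoders[k](z_prior)
    d_loss = bce(discs[k](x_k, z_enc), torch.ones(x_k.size(0), 1)) + \
             bce(discs[k](x_gen, z_prior), torch.zeros(x_k.size(0), 1))
```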
