This paper proposes a Disentangled gEnerative cAusal Representation (DEAR) learning method under appropriate supervised information. Unlike existing disentanglement methods that enforce independence of the latent variables, we consider the general case where the underlying factors of interest can be causally related. We show that previous methods with independent priors fail to disentangle causally related factors even under supervision. Motivated by this finding, we propose DEAR, a new disentangled learning method that enables causal controllable generation and causal representation learning. The key ingredient of this formulation is a structural causal model (SCM) used as the prior distribution of a bidirectional generative model. The prior is trained jointly with a generator and an encoder using a suitable GAN algorithm that incorporates supervision on the ground-truth factors and their underlying causal structure. We provide theoretical justification for the identifiability and asymptotic convergence of the proposed method. Extensive experiments on both synthetic and real data sets demonstrate the effectiveness of DEAR in causal controllable generation, as well as the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.
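To make the SCM-prior idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of sampling latent factors from a linear SCM prior: with a strictly upper-triangular DAG adjacency matrix A, the latents satisfy z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps, so causally related factors are correlated rather than independent. The function name, the linear-Gaussian form, and the example graph are all assumptions made for illustration.

```python
import numpy as np

def scm_prior_sample(A, n_samples, rng):
    """Illustrative linear SCM prior: z = A^T z + eps, hence
    z = (I - A^T)^{-1} eps. A[i, j] != 0 encodes the edge i -> j;
    A must be a strictly upper-triangular DAG adjacency matrix.
    This is a simplified sketch, not DEAR's actual (learned) prior."""
    d = A.shape[0]
    eps = rng.standard_normal((n_samples, d))       # exogenous noise
    M = np.eye(d) - A.T                             # lower triangular, unit diagonal
    z = np.linalg.solve(M, eps.T).T                 # solve (I - A^T) z = eps per sample
    return z

# Hypothetical example: 3 factors with causal edges 0 -> 1 and 0 -> 2
A = np.zeros((3, 3))
A[0, 1] = 0.8
A[0, 2] = 0.5
rng = np.random.default_rng(0)
z = scm_prior_sample(A, 1000, rng)
# z[:, 1] is now correlated with its cause z[:, 0], unlike an independent prior
```

In DEAR the analogous transformation is parameterized and trained jointly with the generator and encoder, with supervision tying the latents to the ground-truth factors and the known causal structure.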
Author Information
Xinwei Shen
Furui Liu (Zhejiang Lab)
Hanze Dong (HKUST)
Qing Lian
Zhitang Chen (Huawei Noah’s Ark Lab)
Tong Zhang (HKUST)
More from the Same Authors
- 2023 Poster: Uncertainty Estimation by Fisher Information-based Evidential Deep Learning
  Danruo Deng · Guangyong Chen · Yang YU · Furui Liu · Pheng Ann Heng
- 2022 Poster: Local Augmentation for Graph Neural Networks
  Songtao Liu · Rex (Zhitao) Ying · Hanze Dong · Lanqing Li · Tingyang Xu · Yu Rong · Peilin Zhao · Junzhou Huang · Dinghao Wu
- 2022 Spotlight: Local Augmentation for Graph Neural Networks
  Songtao Liu · Rex (Zhitao) Ying · Hanze Dong · Lanqing Li · Tingyang Xu · Yu Rong · Peilin Zhao · Junzhou Huang · Dinghao Wu
- 2018 Poster: Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning
  Thomas Dietterich · George Trimponias · Zhitang Chen
- 2018 Oral: Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning
  Thomas Dietterich · George Trimponias · Zhitang Chen