

Poster in Workshop: “Could it have been different?” Counterfactuals in Minds and Machines

Counterfactual Generation with Identifiability Guarantees

Hanqi Yan · Lingjing Kong · Lin Gui · Yuejie Chi · Eric Xing · Yulan He · Kun Zhang


Abstract:

Counterfactual generation requires identifying the disentangled latent representations, such as content and style, that underlie the observed data. Existing unsupervised methods rely crucially on oversimplified assumptions, such as independence between the content and style variables, to identify the latent variables, even though such assumptions may not hold for complex data distributions. This problem is exacerbated when data are sampled from multiple domains, as required by prior work, since the dependence between content and style may vary significantly across domains. In this work, we tackle the dependence between the content and style variables inherent in the counterfactual generation task. We establish identification guarantees by leveraging the relative sparsity of the influences from different latent variables. Our theoretical insights enable the development of a doMain AdapTive conTrollable text gEneration model, called MATTE. It achieves state-of-the-art performance in unsupervised controllable text generation tasks on large-scale datasets.
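To make the "relative sparsity of influences" idea concrete, below is a minimal, hypothetical sketch of a sparsity-of-influence penalty on a toy decoder. This is not the authors' MATTE implementation; the architecture, dimensions, and the L1-on-Jacobian criterion are assumptions chosen only to illustrate how one might encourage style latents to influence few output coordinates.

```python
# Hypothetical sketch: penalise the influence of style latents on the
# decoder output via an L1 norm on the Jacobian. Not the MATTE code.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Maps concatenated (content, style) latents to an observation."""
    def __init__(self, content_dim=8, style_dim=4, obs_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 64),
            nn.ReLU(),
            nn.Linear(64, obs_dim),
        )

    def forward(self, content, style):
        return self.net(torch.cat([content, style], dim=-1))

def influence_sparsity_penalty(decoder, content, style):
    """L1 penalty on the decoder's Jacobian w.r.t. the style latents.

    Encouraging each style variable to affect only a small part of the
    output is one way to operationalise 'relative sparsity of
    influences'; the exact criterion used in the paper may differ.
    """
    style = style.detach().requires_grad_(True)
    out = decoder(content, style)
    jac_rows = []
    for k in range(out.shape[-1]):
        grad = torch.autograd.grad(
            out[..., k].sum(), style, retain_graph=True, create_graph=True
        )[0]
        jac_rows.append(grad)
    jacobian = torch.stack(jac_rows, dim=-2)  # (batch, obs_dim, style_dim)
    return jacobian.abs().mean()

# Toy usage: this penalty would be added to a reconstruction objective.
decoder = ToyDecoder()
content = torch.randn(16, 8)
style = torch.randn(16, 4)
penalty = influence_sparsity_penalty(decoder, content, style)
```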
