Poster

A Fixed-Point Approach for Causal Generative Modeling

Meyer Scetbon · Joel Jennings · Agrin Hilmkil · Cheng Zhang · Chao Ma


Abstract:

Modeling true world data-generating processes lies at the heart of empirical science. Structural Causal Models (SCMs) and their associated Directed Acyclic Graphs (DAGs) provide an increasingly popular answer to such problems by defining the causal generative process that transforms random noise into observations. However, learning the SCM and its causal structure from observational data poses an ill-posed and NP-hard inverse problem. To circumvent these issues, we propose a new and equivalent formalism to describe SCMs, viewed as fixed-point problems on the causally ordered variables, and we show two cases where fixed-point SCMs can be uniquely recovered from observations given the topological ordering (TO). Based on this, we design a two-stage causal generative model that first infers, in a zero-shot manner, the causal ordering from observations, and then uses the predicted order to learn the generating fixed-point SCM. To infer TOs, we propose to amortize the TO inference task from observations on generated datasets by sequentially predicting the leaves of the graphs seen during training. To learn fixed-point SCMs, we design a transformer-based architecture that exploits a new attention mechanism enabling the modeling of causal structures, and show that our parameterization is consistent with the definition of fixed-point SCMs. Finally, we conduct an extensive evaluation of each method individually, and show that when combined, our model outperforms various baselines on generated out-of-distribution problems.
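To make the fixed-point view of SCMs concrete, here is a minimal sketch (not the paper's method or architecture) of a linear-Gaussian SCM whose variables are in topological order: each variable depends only on earlier ones, so the generating map x ↦ F(x, z) has strictly lower-triangular dependence and its iteration reaches the fixed point in at most d steps. All names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: an SCM as a fixed-point problem on causally
# ordered variables. With a strictly lower-triangular weight matrix W,
# x_i = sum_{j < i} W[i, j] * x_j + z_i, i.e. x = F(x, z) with F(x) = W x + z.
rng = np.random.default_rng(0)
d = 4
W = np.tril(rng.normal(size=(d, d)), k=-1)  # strictly lower-triangular: respects the TO
z = rng.normal(size=d)                      # exogenous noise

def F(x):
    # One application of the causal mechanism given fixed noise z.
    return W @ x + z

x = np.zeros(d)
for _ in range(d):
    # d iterations suffice: W is nilpotent (W**d == 0) for a DAG in topological order.
    x = F(x)

assert np.allclose(x, F(x))  # x is the unique fixed point, i.e. the generated sample
```

The same fixed point can be read off in closed form as x = (I - W)^{-1} z, which is why, given the topological ordering, generation reduces to a well-posed triangular solve in this linear special case.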
