

Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Diffusion Based Causal Representation Learning

Amir Mohammad Karimi Mamaghan · Francesco Quinzan · Andrea Dittadi · Stefan Bauer

Keywords: [ diffusion-based models ] [ weak supervision ] [ causal representation learning ]


Abstract:

Causal reasoning can be considered a cornerstone of intelligent systems. Access to an underlying causal graph comes with the promise of cause-effect estimation and the identification of efficient and safe interventions. However, depending on the application and the complexity of the system, a single causal graph might be insufficient, and even the variables of interest and the levels of abstraction might change. This is incompatible with currently deployed generative models, including popular VAE-based approaches, which provide only a representation from a point estimate. In this work, we study recently introduced diffusion-based representations, which offer access to infinite-dimensional latent codes that encode different levels of information. As a first proof of principle, we investigate using a single point of these infinite-dimensional codes for causal representation learning, and we demonstrate experimentally that this approach performs comparably well in identifying both the causal structure and the causal variables.
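To make the "single point of an infinite-dimensional code" idea concrete, here is a minimal, hedged sketch (not the authors' code): in a diffusion model, the variance-preserving forward process assigns each input x0 a whole continuum of noisy latents {x_t} indexed by time t, and one can take the latent at a single chosen t* as the representation. The cosine schedule and the function names below are illustrative assumptions, not from the paper.

```python
import numpy as np

def alpha_bar(t):
    """Cumulative signal coefficient of a simple cosine noise schedule
    (illustrative assumption, not the paper's schedule)."""
    return np.cos(0.5 * np.pi * t) ** 2

def encode_at_t(x0, t, rng):
    """Variance-preserving forward noising:
    x_t = sqrt(alpha_bar(t)) * x0 + sqrt(1 - alpha_bar(t)) * eps.
    The family {x_t : t in (0, 1]} is the infinite-dimensional code;
    evaluating at one t gives a single finite-dimensional latent."""
    ab = alpha_bar(t)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)            # toy "observation"
z = encode_at_t(x0, t=0.3, rng=rng)    # one point of the trajectory
print(z.shape)                          # → (8,)
```

At t near 0 the latent is dominated by the signal x0; at t near 1 it is almost pure noise, which is one sense in which different points of the code carry different levels of information.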
