

Poster in Workshop: Spurious correlations, Invariance, and Stability (SCIS)

Representation Learning as Finding Necessary and Sufficient Causes

Yixin Wang · Michael Jordan

Keywords: [ non-spuriousness ] [ probabilities of causation ] [ disentanglement ] [ causal inference ] [ representation learning ]


Abstract:

Representation learning constructs low-dimensional representations to summarize essential features of high-dimensional data. This learning problem is often approached by describing various desiderata associated with learned representations; e.g., that they be non-spurious or efficient. It can be challenging, however, to turn these intuitive desiderata into formal criteria that can be measured and enhanced based on observed data. In this paper, we take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) using counterfactual quantities and observable consequences of causal assertions. This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest and to learn non-spurious representations from single observational datasets.
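The counterfactual quantities referenced above are, per the keywords, probabilities of causation. As a point of reference, here is a minimal sketch of Pearl's standard definitions for a binary cause X and outcome Y, where Y_x denotes the value Y would take had X been set to x; how the paper adapts these quantities to learned representations is not spelled out in this abstract.

    % Probability of necessity and sufficiency: x is both required
    % for y and capable of producing it on its own.
    \mathrm{PNS} = P\big(Y_{x} = y,\; Y_{x'} = y'\big)

    % Probability of necessity: given that x and y occurred,
    % y would not have occurred had x not occurred.
    \mathrm{PN} = P\big(Y_{x'} = y' \mid X = x,\, Y = y\big)

    % Probability of sufficiency: given that neither x nor y occurred,
    % setting X = x would have produced y.
    \mathrm{PS} = P\big(Y_{x} = y \mid X = x',\, Y = y'\big)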
