
 
Spotlight
Estimating Identifiable Causal Effects on Markov Equivalence Class through Double Machine Learning
Yonghan Jung · Jin Tian · Elias Bareinboim

Thu Jul 22 05:30 PM -- 05:35 PM (PDT)

General methods have been developed for estimating causal effects from observational data under causal assumptions encoded in the form of a causal graph. Most of this literature assumes that the underlying causal graph is completely specified. However, only observational data is available in most practical settings, which means that one can learn at most a Markov equivalence class (MEC) of the underlying causal graph. In this paper, we study the problem of causal estimation from an MEC represented by a partial ancestral graph (PAG), which is learnable from observational data. We develop a general estimator for any identifiable causal effect in a PAG. This result fills a gap toward an end-to-end solution to causal inference, from observational data through effect estimation. Specifically, we develop a complete identification algorithm that derives an influence function for any identifiable causal effect in a PAG. We then construct a double/debiased machine learning (DML) estimator that is robust to model misspecification and biases in nuisance function estimation, permitting the use of modern machine learning techniques. Simulation results corroborate the theory.
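
The paper's estimator applies to any effect identifiable from a PAG; as a rough illustration of the generic DML ingredients the abstract refers to (flexible machine-learning nuisance estimation, an influence-function-based score, and cross-fitting), the sketch below implements a standard cross-fitted doubly robust (AIPW) estimator for the simple backdoor case E[Y | do(X=1)]. This is not the paper's general procedure; the function name, data layout, and scikit-learn learners are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): cross-fitted doubly robust (AIPW)
# estimation of E[Y | do(X=1)] under a simple backdoor adjustment with covariates Z.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def dml_aipw_do_x1(Z, X, Y, n_splits=5, clip=1e-3):
    """Cross-fitted AIPW estimate of E[Y | do(X=1)], assuming Z blocks all backdoor paths."""
    n = len(Y)
    scores = np.zeros(n)
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=0).split(Z):
        # Nuisance 1: propensity score P(X=1 | Z), fit on the training fold.
        prop = GradientBoostingClassifier().fit(Z[train_idx], X[train_idx])
        # Nuisance 2: outcome regression E[Y | X=1, Z], fit on treated units in the training fold.
        treated = train_idx[X[train_idx] == 1]
        outcome = GradientBoostingRegressor().fit(Z[treated], Y[treated])
        # Evaluate the influence-function-based score on the held-out fold (cross-fitting).
        e = np.clip(prop.predict_proba(Z[test_idx])[:, 1], clip, 1 - clip)
        mu = outcome.predict(Z[test_idx])
        scores[test_idx] = mu + (X[test_idx] == 1) / e * (Y[test_idx] - mu)
    est = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(n)  # plug-in standard error from the score
    return est, se
```

Cross-fitting lets the nuisance functions be estimated with flexible learners while keeping the final estimate approximately unbiased; the doubly robust score remains consistent if either nuisance model is well specified.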

Author Information

Yonghan Jung (Purdue University)
Jin Tian (Iowa State University)
Elias Bareinboim (Columbia University)

Elias Bareinboim is an associate professor in the Department of Computer Science and the director of the Causal Artificial Intelligence (CausalAI) Laboratory at Columbia University. His research focuses on causal and counterfactual inference and their applications to artificial intelligence and machine learning, as well as data-driven fields in the health and social sciences. His work was the first to propose a general solution to the problem of "causal data fusion," providing practical methods for combining datasets generated under different experimental conditions and plagued with various biases. In recent years, Bareinboim has been exploring the intersection of causal inference with decision-making (including reinforcement learning) and explainability (including fairness analysis). Before joining Columbia, he was an assistant professor at Purdue University; he received his Ph.D. in Computer Science from the University of California, Los Angeles. Bareinboim was named one of "AI's 10 to Watch" by IEEE, and is a recipient of an NSF CAREER Award, the Dan David Prize Scholarship, the 2014 AAAI Outstanding Paper Award, and the 2019 UAI Best Paper Award.
