Oral
Sensitivity Analysis of Linear Structural Causal Models
Carlos Cinelli · Daniel Kumor · Bryant Chen · Judea Pearl · Elias Bareinboim

Wed Jun 12 03:05 PM -- 03:10 PM (PDT) @ Grand Ballroom

Causal inference requires assumptions about the data generating process, many of which are unverifiable from the data. Given that some causal assumptions might be uncertain or disputed, formal methods are needed to quantify how sensitive research conclusions are to violations of those assumptions. Although an extensive literature exists on the topic, most results are limited to specific model structures, while a general-purpose algorithmic framework for sensitivity analysis is still lacking. In this paper, we develop a formal, systematic approach to sensitivity analysis for arbitrary linear Structural Causal Models (SCMs). We start by formalizing sensitivity analysis as a constrained identification problem. We then develop an efficient, graph-based identification algorithm that exploits non-zero constraints on both directed and bidirected edges. This allows researchers to systematically derive sensitivity curves for a target causal quantity with an arbitrary set of path coefficients and error covariances as sensitivity parameters. These results can be used to display the degree to which violations of causal assumptions affect the target quantity of interest, and to judge, on scientific grounds, whether problematic degrees of violations are plausible.
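To make the idea of a sensitivity curve concrete, consider the simplest possible case the framework covers: a single treatment X, outcome Y, and a bidirected edge X <-> Y whose error covariance acts as the sensitivity parameter. The sketch below is only an illustrative toy example of this constrained-identification idea, not the paper's graph-based algorithm; all variable names and numbers (lam_true, eps_true, eps_grid) are assumptions chosen for the demonstration.

```python
# Illustrative sketch (not the paper's algorithm): a sensitivity curve for the
# simplest linear SCM with one treatment X, one outcome Y, and a bidirected
# edge X <-> Y whose error covariance "eps" is the sensitivity parameter.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural model: X = e_x,  Y = lam * X + e_y,  with Cov(e_x, e_y) = eps_true.
lam_true, eps_true = 1.0, 0.5
e_x, e_y = rng.multivariate_normal(
    [0, 0], [[1.0, eps_true], [eps_true, 1.0]], n
).T
X = e_x
Y = lam_true * X + e_y

# The naive OLS coefficient of Y on X is biased by the unmodeled covariance:
#   Cov(X, Y) = lam * Var(X) + eps   =>   lam(eps) = (Cov(X, Y) - eps) / Var(X)
cov_xy = np.cov(X, Y)[0, 1]
var_x = X.var()
print("naive OLS estimate:", cov_xy / var_x)  # close to lam_true + eps_true

# Sensitivity curve: the causal coefficient identified under each hypothesized
# value of the error covariance eps.
eps_grid = np.linspace(-1.0, 1.0, 9)
for eps in eps_grid:
    print(f"eps = {eps:+.2f}  ->  lam(eps) = {(cov_xy - eps) / var_x:.3f}")
```

In this toy case the curve has a closed form; the paper's contribution is an algorithm that derives such curves for arbitrary linear SCMs, with any set of path coefficients and error covariances playing the role of eps.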

Author Information

Carlos Cinelli (UCLA)
Daniel Kumor (Purdue University)
Bryant Chen (Brex)
Judea Pearl (UCLA)
Elias Bareinboim (Purdue)

Elias Bareinboim is an associate professor in the Department of Computer Science and the director of the Causal Artificial Intelligence (CausalAI) Laboratory at Columbia University. His research focuses on causal and counterfactual inference and their applications to artificial intelligence and machine learning, as well as data-driven fields in the health and social sciences. His work was the first to propose a general solution to the problem of "causal data fusion," providing practical methods for combining datasets generated under different experimental conditions and plagued with various biases. In recent years, Bareinboim has been exploring the intersection of causal inference with decision-making (including reinforcement learning) and explainability (including fairness analysis). Before joining Columbia, he was an assistant professor at Purdue University and received his Ph.D. in Computer Science from the University of California, Los Angeles. Bareinboim was named one of "AI's 10 to Watch" by IEEE, and is a recipient of an NSF CAREER Award, the Dan David Prize Scholarship, the 2014 AAAI Outstanding Paper Award, and the 2019 UAI Best Paper Award.
