Session: Causality
Causal Identification under Markov Equivalence: Completeness Results
Amin Jaber · Jiji Zhang · Elias Bareinboim
Causal effect identification is the task of determining whether a causal distribution is computable from the combination of an observational distribution and substantive knowledge about the domain under investigation. One of the most studied versions of this problem assumes that knowledge is articulated in the form of a fully known causal diagram, which is arguably a strong assumption in many settings. In this paper, we relax this requirement and consider that the knowledge is articulated in the form of an equivalence class of causal diagrams, in particular, a partial ancestral graph (PAG). This is attractive because a PAG can be learned directly from data, and the data scientist does not need to commit to a particular, unique diagram. There are different sufficient conditions for identification in PAGs, but none is complete. We derive a complete algorithm for identification given a PAG. This implies that whenever the causal effect is identifiable, the algorithm returns a valid identification expression; alternatively, it will throw a failure condition, which means that the effect is provably not identifiable (unless stronger assumptions are made). We further provide a graphical characterization of non-identifiability of causal effects in PAGs.
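For readers less familiar with the terminology, an identification expression rewrites an interventional quantity purely in terms of the observational distribution; the back-door adjustment formula below is the simplest such expression and is shown only as a generic illustration, not as the paper's result (the paper's algorithm derives expressions of this general kind from a PAG):

```latex
% Back-door adjustment: the simplest example of an identification expression,
% valid when the covariate set Z blocks every back-door path from X to Y.
P(y \mid do(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
```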
Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models
Michael Oberst · David Sontag
We introduce an off-policy evaluation procedure for highlighting episodes where applying a reinforcement learning (RL) policy is likely to have produced a substantially different outcome than the observed policy. In particular, we introduce a class of structural causal models (SCMs) for generating counterfactual trajectories in finite partially observable Markov Decision Processes (POMDPs). We see this as a useful procedure for off-policy "debugging" in high-risk settings (e.g., healthcare); by decomposing the expected reward under the RL policy into specific episodes, we can identify groups where it is more likely to dramatically under- or over-perform the observed policy. This in turn can be used to facilitate review of specific episodes by domain experts, as well as to guide data collection (e.g., to characterize patient sub-types). We demonstrate the utility of this procedure in the setting of sepsis management.
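As a minimal sketch of the counterfactual machinery, the snippet below applies the Gumbel-max parameterization to a single categorical transition: Gumbel noise consistent with the observed outcome is recovered by naive rejection sampling (the paper uses a more efficient sampler), then replayed under the alternative policy's transition probabilities. All probabilities and names here are illustrative.

```python
import numpy as np

def counterfactual_category(rng, p_obs, p_cf, observed, n_draws=1000):
    """Counterfactual draws for one categorical transition in a Gumbel-max SCM.

    p_obs    : outcome probabilities under the observed (behavior) policy
    p_cf     : outcome probabilities under the alternative (RL) policy
    observed : category actually observed under p_obs

    Abduction is done by rejection sampling: keep Gumbel noise draws whose
    argmax under p_obs reproduces the observation, then replay them under p_cf.
    """
    draws = []
    while len(draws) < n_draws:
        g = rng.gumbel(size=len(p_obs))                 # shared exogenous noise
        if np.argmax(np.log(p_obs) + g) == observed:    # abduction step
            draws.append(np.argmax(np.log(p_cf) + g))   # action + prediction step
    return np.array(draws)

rng = np.random.default_rng(0)
p_obs = np.array([0.6, 0.3, 0.1])   # toy transition probabilities
p_cf = np.array([0.2, 0.2, 0.6])
cf = counterfactual_category(rng, p_obs, p_cf, observed=0)
print(np.bincount(cf, minlength=3) / len(cf))  # counterfactual outcome frequencies
```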
Causal Discovery and Forecasting in Nonstationary Environments with State-Space Models
Biwei Huang · Kun Zhang · Mingming Gong · Clark Glymour
In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series and are concerned both with finding causal relations and with forecasting the values of variables of interest, both of which are particularly challenging in nonstationary environments. In this paper, we study causal discovery and forecasting for nonstationary time series. By exploiting a particular type of state-space model to represent the processes, we show that nonstationarity helps to identify the causal structure and that forecasting naturally benefits from the learned causal knowledge. Specifically, we allow changes in both causal strengths and noise variances in the nonlinear state-space models, which, interestingly, renders both the causal structure and the model parameters identifiable. Given the causal model, we treat forecasting as a Bayesian inference problem in the causal model, which exploits the time-varying property of the data and adapts to new observations in a principled manner. Experimental results on synthetic and real-world data sets demonstrate the efficacy of the proposed methods.
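As a rough caricature of the setting (our notation; the paper's state-space model is nonlinear and more general), each variable is driven by causal coefficients and noise variances that themselves drift over time and are tracked as latent states, which is what lets Bayesian forecasting adapt to the nonstationarity:

```latex
% Time-varying caricature: causal strengths b_{ij,t} and noise variances
% \sigma_{i,t}^2 evolve over time (e.g., as slowly drifting latent states).
x_{i,t} \;=\; \sum_{j \in \mathrm{Pa}(i)} b_{ij,t}\, x_{j,t-1} \;+\; e_{i,t},
\qquad e_{i,t} \sim \mathcal{N}\!\bigl(0, \sigma_{i,t}^{2}\bigr)
```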
Classifying Treatment Responders Under Causal Effect Monotonicity
Nathan Kallus
In the context of individual-level causal inference, we study the problem of predicting whether someone will respond or not to a treatment based on their features and past examples of features, treatment indicator (e.g., drug/no drug), and a binary outcome (e.g., recovery from disease). As a classification task, the problem is made difficult by not knowing the example outcomes under the opposite treatment indicators. We assume the effect is monotonic, as in advertising's effect on a purchase or bail-setting's effect on reappearance in court: either it would have happened regardless of treatment, not happened regardless, or happened only depending on exposure to treatment. Predicting whether the latter is latently the case is our focus. While previous work focuses on conditional average treatment effect estimation, formulating the problem as a classification task allows us to develop new tools more suited to this problem. By leveraging monotonicity, we develop new discriminative and generative algorithms for the responder-classification problem. We explore and discuss connections to corrupted data and policy learning. We provide an empirical study with both synthetic and real datasets to compare these specialized algorithms to standard benchmarks.
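To make the classification view concrete, here is a simple plug-in baseline rather than the paper's specialized algorithms: under monotonicity, the probability that a unit with features x is a responder equals the conditional average treatment effect, so the Bayes classifier thresholds an estimate of that difference at 1/2. The model class and synthetic data below are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_responder_classifier(X, t, y):
    """Plug-in responder classifier under effect monotonicity.

    Fits separate outcome models on the treated (t == 1) and control (t == 0)
    groups; under monotonicity, P(responder | x) = P(y=1 | t=1, x) - P(y=1 | t=0, x),
    so the Bayes classifier thresholds this difference at 1/2.
    """
    m1 = LogisticRegression(max_iter=1000).fit(X[t == 1], y[t == 1])
    m0 = LogisticRegression(max_iter=1000).fit(X[t == 0], y[t == 0])

    def predict_responder(X_new):
        tau_hat = m1.predict_proba(X_new)[:, 1] - m0.predict_proba(X_new)[:, 1]
        return (tau_hat >= 0.5).astype(int), tau_hat

    return predict_responder

# Toy usage on synthetic, monotone data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
t = rng.integers(0, 2, size=2000)
baseline = (X[:, 1] > 1.0).astype(int)        # outcome that occurs regardless
lift = (X[:, 0] > 0.5).astype(int)            # outcome that occurs only if treated
y = np.where(t == 1, np.maximum(baseline, lift), baseline)
predict = fit_responder_classifier(X, t, y)
labels, tau_hat = predict(X[:5])
print(labels, np.round(tau_hat, 2))
```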
Learning Models from Data with Measurement Error: Tackling Underreporting
Roy Adams · Yuelong Ji · Xiaobin Wang · Suchi Saria
Measurement error in observational datasets can lead to systematic bias in inferences based on these datasets. As studies based on observational data are increasingly used to inform decisions with real-world impact, it is critical that we develop a robust set of techniques for analyzing and adjusting for these biases. In this paper we present a method for estimating the distribution of an outcome given a binary exposure that is subject to underreporting. Our method is based on a missing data view of the measurement error problem, where the true exposure is treated as a latent variable that is marginalized out of a joint model. We prove three different conditions under which the outcome distribution can still be identified from data containing only error-prone observations of the exposure. We demonstrate this method on synthetic data and analyze its sensitivity to near violations of the identifiability conditions. Finally, we use this method to estimate the effects of maternal smoking and opioid use during pregnancy on childhood obesity, two important problems in public health. Using the proposed method, we estimate these effects using only subject-reported drug use data and substantially refine the range of estimates generated by a sensitivity-analysis-based approach. Further, the estimates produced by our method are consistent with the existing literature on both the effects of maternal smoking and the rate at which subjects underreport smoking.
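In symbols (our notation, not necessarily the paper's), the missing-data view marginalizes the true exposure out of a joint model, with underreporting encoded as a one-sided error in the observed report:

```latex
% Observed-data likelihood with the true exposure a^* marginalized out:
p(y, a \mid x) \;=\; \sum_{a^{*} \in \{0,1\}} p(y \mid a^{*}, x)\, p(a \mid a^{*}, x)\, p(a^{*} \mid x)
% Underreporting as a one-sided error (no false positives):
% p(a = 1 \mid a^{*} = 0, x) = 0, while p(a = 0 \mid a^{*} = 1, x) may exceed zero.
```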
Adjustment Criteria for Generalizing Experimental Findings
Juan Correa · Jin Tian · Elias Bareinboim
One of the central problems across the data-driven sciences is that of generalizing experimental findings across changing conditions, for instance, whether a causal distribution obtained from a controlled experiment is valid in settings beyond the study population. While a proper design and careful execution of the experiment can support, under mild conditions, the validity of inferences about the population in which the experiment was conducted, two challenges make the extrapolation step difficult – transportability and sampling selection bias. The former poses the question of whether the domain (i.e., settings, population, environment) where the experiment is realized differs from the target domain in its distributions and causal mechanisms; the latter refers to distortions in the sample's proportions due to preferential selection of units into the study. In this paper, we investigate the assumptions and machinery necessary for using covariate adjustment to correct for the biases generated by both of these problems and to generalize biased experimental data in order to infer causal effects in the target domain. We provide complete graphical conditions to determine whether a set of covariates is admissible for adjustment. Building on this graphical characterization, we develop an efficient algorithm that enumerates all admissible sets with a polynomial-time delay guarantee; this can be useful when some variables are preferred over others due to different costs or amenability to measurement.
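For intuition, in the simplest case with a transportability difference but no selection bias, an admissible covariate set Z lets the experimental findings be reweighted by the target domain's covariate distribution; the paper's criteria characterize admissibility in general, including under selection bias. The formula below is the generic template (notation ours):

```latex
% Covariate adjustment for generalization (source experiment, target domain *):
P^{*}(y \mid do(x)) \;=\; \sum_{z} P(y \mid do(x), z)\, P^{*}(z)
```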
Conditional Independence in Testing Bayesian Networks
Yujia Shen · Haiying Huang · Arthur Choi · Adnan Darwiche
Testing Bayesian Networks (TBNs) were introduced recently to represent a set of distributions, one of which is selected based on the given evidence and used for reasoning. TBNs are more expressive than classical Bayesian Networks (BNs): Marginal queries correspond to multi-linear functions in BNs and to piecewise multi-linear functions in TBNs. Moreover, marginal TBN queries are universal approximators, like neural networks. In this paper, we study conditional independence in TBNs, showing that it can be inferred from d-separation as in BNs. We also study the role of TBN expressiveness and independence in dealing with the problem of learning using incomplete models (i.e., ones that are missing nodes or edges from the data-generating model). Finally, we illustrate our results on a number of concrete examples, including a case study on (high order) Hidden Markov Models.
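Because conditional independence in TBNs can be read off the structure via d-separation exactly as in classical BNs, the usual ancestral-moral-graph test applies unchanged; the sketch below (using networkx; names and the toy graph are illustrative) works for any DAG, whether it parameterizes a BN or a TBN.

```python
import networkx as nx

def d_separated(G, xs, ys, zs):
    """Test whether node sets xs and ys are d-separated by zs in the DAG G.

    Classical construction: restrict to ancestors of xs | ys | zs, moralize,
    delete zs, and check whether xs and ys are disconnected.
    """
    xs, ys, zs = set(xs), set(ys), set(zs)
    relevant = xs | ys | zs
    for node in list(relevant):
        relevant |= nx.ancestors(G, node)
    moral = nx.moral_graph(G.subgraph(relevant))  # marry co-parents, drop directions
    moral.remove_nodes_from(zs)                   # conditioning blocks paths through zs
    return not any(nx.has_path(moral, x, y) for x in xs for y in ys)

# Toy usage: in the chain X -> M -> Y, conditioning on M separates X and Y.
G = nx.DiGraph([("X", "M"), ("M", "Y")])
print(d_separated(G, {"X"}, {"Y"}, {"M"}))   # True
print(d_separated(G, {"X"}, {"Y"}, set()))   # False
```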
Sensitivity Analysis of Linear Structural Causal Models
Carlos Cinelli · Daniel Kumor · Bryant Chen · Judea Pearl · Elias Bareinboim
Causal inference requires assumptions about the data generating process, many of which are unverifiable from the data. Given that some causal assumptions might be uncertain or disputed, formal methods are needed to quantify how sensitive research conclusions are to violations of those assumptions. Although an extensive literature exists on the topic, most results are limited to specific model structures, while a general-purpose algorithmic framework for sensitivity analysis is still lacking. In this paper, we develop a formal, systematic approach to sensitivity analysis for arbitrary linear Structural Causal Models (SCMs). We start by formalizing sensitivity analysis as a constrained identification problem. We then develop an efficient, graph-based identification algorithm that exploits non-zero constraints on both directed and bidirected edges. This allows researchers to systematically derive sensitivity curves for a target causal quantity with an arbitrary set of path coefficients and error covariances as sensitivity parameters. These results can be used to display the degree to which violations of causal assumptions affect the target quantity of interest, and to judge, on scientific grounds, whether problematic degrees of violations are plausible.
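The smallest instance of such a sensitivity curve (a worked example of ours, not the paper's general algorithm): in the model x = u_x, y = λx + u_y, with an unobserved common cause summarized by Cov(u_x, u_y) = c, the regression coefficient of y on x conflates the causal effect with a bias term, so the effect can be plotted as a function of the sensitivity parameter c:

```latex
% One-parameter sensitivity curve in a single-edge linear SCM:
\beta_{yx} \;=\; \frac{\operatorname{Cov}(x, y)}{\operatorname{Var}(x)} \;=\; \lambda + \frac{c}{\sigma_{x}^{2}},
\qquad\text{hence}\qquad
\lambda(c) \;=\; \beta_{yx} - \frac{c}{\sigma_{x}^{2}}
```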
More Efficient Off-Policy Evaluation through Regularized Targeted Learning
Aurelien Bibaut · Ivana Malenica · Nikos Vlassis · Mark van der Laan
We study the problem of off-policy evaluation (OPE) in Reinforcement Learning (RL), where the aim is to estimate the performance of a new policy given historical data that may have been generated by a different policy, or policies. In particular, we introduce a novel doubly-robust estimator for the OPE problem in RL, based on the Targeted Maximum Likelihood Estimation principle from the statistical causal inference literature. We also introduce several variance reduction techniques that lead to impressive performance gains in off-policy evaluation. We show empirically that our estimator uniformly wins over existing off-policy evaluation methods across multiple RL environments and various levels of model misspecification. Finally, we further the existing theoretical analysis of estimators for the RL off-policy estimation problem by showing their $O_P(1/\sqrt{n})$ rate of convergence and characterizing their asymptotic distribution.
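For context (notation ours), the standard step-wise doubly-robust OPE estimator combines per-step importance ratios with an estimated action-value function; the paper's estimator is, roughly, a targeted (TMLE-style) and regularized relative of this family rather than the plain form shown here:

```latex
% Step-wise doubly-robust estimate of the value of evaluation policy \pi
% from n trajectories generated by behavior policy \mu:
\hat{V}^{\pi}_{\mathrm{DR}}
= \frac{1}{n}\sum_{i=1}^{n}\sum_{t=0}^{T}\gamma^{t}
\Bigl[\rho^{i}_{0:t-1}\,\hat{V}^{\pi}(s^{i}_{t})
+ \rho^{i}_{0:t}\bigl(r^{i}_{t} - \hat{Q}^{\pi}(s^{i}_{t}, a^{i}_{t})\bigr)\Bigr],
\qquad
\rho^{i}_{0:t} = \prod_{k=0}^{t}\frac{\pi(a^{i}_{k}\mid s^{i}_{k})}{\mu(a^{i}_{k}\mid s^{i}_{k})},
\quad \rho^{i}_{0:-1} = 1
```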
Inferring Heterogeneous Causal Effects in Presence of Spatial Confounding
Muhammad Osama · Dave Zachariah · Thomas Schön
We address the problem of inferring the causal effect of an exposure on an outcome across space, using observational data. The data is possibly subject to unmeasured confounding variables which, in a standard approach, must be adjusted for by estimating a nuisance function. Here we develop a method that eliminates the nuisance function, while mitigating the resulting errors-in-variables. The result is a robust and accurate inference method for spatially varying heterogeneous causal effects. The properties of the method are demonstrated on synthetic as well as real data from Germany and the US.
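A schematic of the setting (our notation; the paper's exact formulation may differ): the outcome at location s combines a spatially varying exposure effect with a spatial nuisance term that absorbs unmeasured confounding, and the method's point is to infer the effect without explicitly estimating that nuisance term.

```latex
% Spatially varying effect \tau(s) with a spatial nuisance/confounding term g(s):
y(s) \;=\; \tau(s)\, x(s) \;+\; g(s) \;+\; \varepsilon(s)
```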