Causal inference provides a set of tools and principles that allow one to combine data and causal invariances about the environment to reason about questions of a counterfactual nature -- i.e., what would have happened had reality been different, even when no data about this unrealized reality is available. Reinforcement learning is concerned with efficiently finding a policy that optimizes a specific function (e.g., reward, regret) in interactive and uncertain environments. These two disciplines have evolved independently and with virtually no interaction between them. In fact, they operate over different aspects of the same building block, i.e., counterfactual relations, which makes them umbilically tied.
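To make the counterfactual notion concrete, one standard way of writing such a query uses Pearl's potential-outcome notation (a canonical example, not a formula taken from this tutorial): the probability that the outcome $Y$ would have been $y$ had the action $X$ been set to $x$, given that in actuality $X = x'$ and $Y = y'$ were observed,

\[
P(Y_{x} = y \mid X = x', Y = y'),
\]

where $Y_x$ denotes the value $Y$ would attain under the intervention $do(X = x)$. No data is ever collected under the condition $\{X = x', Y = y'\}$ combined with $X = x$, which is precisely why such quantities require causal assumptions, and not data alone, to be evaluated.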
In this tutorial, we introduce a unified treatment that puts these two disciplines under the same conceptual and theoretical umbrella. We show that a number of natural and pervasive classes of learning problems emerge when this connection is fully established, which cannot be seen from either discipline individually. In particular, we will discuss generalized policy learning (a combination of online, off-policy, and do-calculus learning), when and where to intervene, counterfactual decision-making (and free will, autonomy, human-AI collaboration), policy generalizability, and causal imitation learning, among others. This new understanding leads to a broader view of what counterfactual learning is and suggests the great potential of studying causality and reinforcement learning side by side, which we name causal reinforcement learning (CRL, for short).