Spotlight

Model-Free and Model-Based Policy Evaluation when Causality is Uncertain

David Bruns-Smith

Session: Learning Theory 1
Wed 21 Jul 5:30 a.m. — 5:35 a.m. PDT

When decision-makers can directly intervene, policy evaluation algorithms give valid causal estimates. In off-policy evaluation (OPE), however, there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy. These "confounders" introduce spurious correlations, so naive estimates of a new policy's value will be biased. We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite-horizon settings when confounders are drawn i.i.d. each period. We demonstrate that a model-based approach using robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics. Finally, we show that when unobserved confounders are persistent over time, OPE is far more difficult, and existing techniques produce extremely conservative bounds.
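To make the idea of a worst-case sensitivity bound concrete, here is a minimal, generic sketch (not the paper's actual algorithm) of the kind of computation such bounds involve. It assumes a standard marginal sensitivity model: each nominal importance weight `w` may be distorted by an unobserved confounder by at most a factor `gamma`, so the true weight lies in `[w / gamma, w * gamma]`, and the adversary picks weights in that box to minimize the self-normalized off-policy estimate. The function name and the single-step setting are illustrative choices, not from the source.

```python
import numpy as np

def worst_case_lower_bound(rewards, weights, gamma):
    """Worst-case lower bound on a self-normalized weighted estimate.

    Sensitivity model (assumed for illustration): the true importance
    weight for sample i lies in [weights[i] / gamma, weights[i] * gamma].
    The minimizer of a linear-fractional objective over a box puts each
    weight at a bound, and sorting by reward reduces the search to a
    threshold: large weights on low rewards, small weights on high ones.
    """
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(weights, dtype=float)
    lo, hi = weights / gamma, weights * gamma

    order = np.argsort(rewards)
    r, lo_s, hi_s = rewards[order], lo[order], hi[order]

    best = np.inf
    n = len(r)
    for k in range(n + 1):
        # Threshold k: up-weight the k lowest rewards, down-weight the rest.
        w = np.concatenate([hi_s[:k], lo_s[k:]])
        best = min(best, np.dot(w, r) / w.sum())
    return best
```

With `gamma = 1` the box collapses to a point and the bound recovers the naive self-normalized estimate; as `gamma` grows, the lower bound degrades, which is the qualitative behavior the abstract's bounds quantify (and which the model-based robust-MDP approach is said to sharpen).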
