We study off-policy evaluation (OPE) of contextual bandit policies for large discrete action spaces, where conventional importance-weighting approaches suffer from excessive variance. To circumvent this variance issue, we propose a new estimator, called OffCEM, that is based on the conjunct effect model (CEM), a novel decomposition of the causal effect into a cluster effect and a residual effect. OffCEM applies importance weighting only to action clusters and addresses the residual causal effect through model-based reward estimation. We show that the proposed estimator is unbiased under a new assumption, called local correctness, which only requires that the residual-effect model preserves the relative expected reward differences of the actions within each cluster. To best leverage the CEM and local correctness, we also propose a new two-step procedure for performing model-based estimation that minimizes bias in the first step and variance in the second step. We find that the resulting OffCEM estimator substantially reduces bias and variance compared to a range of conventional estimators. Experiments demonstrate that OffCEM provides substantial improvements in OPE, especially in the presence of many actions.
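The estimator described above combines a cluster-level importance weight with a model-based direct term. A minimal numpy sketch of that combination is below; the synthetic data, variable names (`pi_0`, `pi_e`, `f_hat`, `cluster_of`), and the cluster-offset reward model (which satisfies local correctness by construction) are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions, n_clusters = 1000, 20, 4
cluster_of = rng.integers(0, n_clusters, size=n_actions)  # cluster assignment c(a)

# Hypothetical logging (pi_0) and target (pi_e) policies as (n, n_actions) tables.
def softmax_policy(seed):
    logits = np.random.default_rng(seed).normal(size=(n, n_actions))
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

pi_0, pi_e = softmax_policy(1), softmax_policy(2)

# Synthetic logged data: actions sampled from pi_0, noisy rewards from q(x, a).
q = rng.normal(size=(n, n_actions))
actions = np.array([rng.choice(n_actions, p=pi_0[i]) for i in range(n)])
rewards = q[np.arange(n), actions] + rng.normal(scale=0.1, size=n)

# Regression model f_hat = q + per-cluster offset: it preserves relative reward
# differences within each cluster, so it is locally correct by construction.
f_hat = q + rng.normal(size=n_clusters)[cluster_of]

# Marginal cluster probabilities pi(c|x) = sum over actions in cluster c.
onehot = np.eye(n_clusters)[cluster_of]   # (n_actions, n_clusters)
pi_0_c, pi_e_c = pi_0 @ onehot, pi_e @ onehot

# OffCEM: importance weighting only over clusters, plus model-based direct term.
c_i = cluster_of[actions]
w = pi_e_c[np.arange(n), c_i] / pi_0_c[np.arange(n), c_i]
direct = (pi_e * f_hat).sum(axis=1)       # E_{a ~ pi_e}[f_hat(x, a)]
v_offcem = np.mean(w * (rewards - f_hat[np.arange(n), actions]) + direct)

v_true = np.mean((pi_e * q).sum(axis=1))  # ground-truth target policy value
```

Because the importance weight is defined over clusters rather than individual actions, its range stays small even when `n_actions` is large; the residual within each cluster is handled entirely by `f_hat`, which is where local correctness does the work.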
Author Information
Yuta Saito (Cornell University)
Qingyang Ren (Cornell University)
Thorsten Joachims (Cornell)
More from the Same Authors
- 2022: Learning from Preference Feedback in Combinatorial Action Spaces (Thorsten Joachims)
- 2022 Poster: Off-Policy Evaluation for Large Action Spaces via Embeddings (Yuta Saito · Thorsten Joachims)
- 2022 Poster: Improving Screening Processes via Calibrated Subset Selection (Luke Lequn Wang · Thorsten Joachims · Manuel Gomez-Rodriguez)
- 2022 Spotlight: Off-Policy Evaluation for Large Action Spaces via Embeddings (Yuta Saito · Thorsten Joachims)
- 2022 Spotlight: Improving Screening Processes via Calibrated Subset Selection (Luke Lequn Wang · Thorsten Joachims · Manuel Gomez-Rodriguez)
- 2021 Poster: Fairness of Exposure in Stochastic Bandits (Luke Lequn Wang · Yiwei Bai · Wen Sun · Thorsten Joachims)
- 2021 Spotlight: Fairness of Exposure in Stochastic Bandits (Luke Lequn Wang · Yiwei Bai · Wen Sun · Thorsten Joachims)
- 2021 Poster: Optimal Off-Policy Evaluation from Multiple Logging Policies (Nathan Kallus · Yuta Saito · Masatoshi Uehara)
- 2021 Spotlight: Optimal Off-Policy Evaluation from Multiple Logging Policies (Nathan Kallus · Yuta Saito · Masatoshi Uehara)
- 2019 Poster: CAB: Continuous Adaptive Blending for Policy Evaluation and Learning (Yi Su · Luke Lequn Wang · Michele Santacatterina · Thorsten Joachims)
- 2019 Oral: CAB: Continuous Adaptive Blending for Policy Evaluation and Learning (Yi Su · Luke Lequn Wang · Michele Santacatterina · Thorsten Joachims)