CAB: Continuous Adaptive Blending for Policy Evaluation and Learning
Yi Su · Lequn Wang · Michele Santacatterina · Thorsten Joachims

Thu Jun 13th 09:35 -- 09:40 AM @ Room 201

The ability to perform offline A/B testing and off-policy learning from logged contextual bandit feedback is highly desirable in a broad range of applications, including recommender systems, search engines, ad placement, and personalized health care. Both offline A/B testing and off-policy learning require a counterfactual estimator that evaluates how a new policy would have performed had it been used instead of the logging policy. In this paper, we identify a family of counterfactual estimators that subsumes most such estimators proposed to date. Our analysis of this family yields a new estimator, called Continuous Adaptive Blending (CAB), which enjoys many advantageous theoretical and practical properties. In particular, it can be substantially less biased than clipped Inverse Propensity Score (IPS) weighting and the Direct Method, and it can have lower variance than the Doubly Robust and IPS estimators. In addition, it is sub-differentiable, so it can be used for learning, unlike the SWITCH estimator. Experimental results show that CAB provides excellent evaluation accuracy and outperforms other counterfactual estimators in terms of learning performance.
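To make the blending idea in the abstract concrete, below is a minimal NumPy sketch of a continuous blend between clipped IPS and the Direct Method in the spirit of CAB, for a discrete action space: the IPS term is clipped at a threshold M, and the reward model fills in exactly the blending mass the clipping gave up. The function name, variable names, and discrete-action setup are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def cab_estimate(rewards, pi0_logged, pi_logged, pi0_all, pi_all, dm_preds, M):
    """Sketch of a CAB-style value estimate from logged bandit feedback.

    rewards    : (n,) observed rewards delta_i for the logged actions
    pi0_logged : (n,) logging-policy propensities pi_0(a_i | x_i)
    pi_logged  : (n,) target-policy probabilities pi(a_i | x_i)
    pi0_all    : (n, k) logging-policy probabilities over all k actions
    pi_all     : (n, k) target-policy probabilities over all k actions
    dm_preds   : (n, k) reward-model predictions delta_hat(x_i, a)
    M          : clipping threshold on the importance weights
    """
    eps = 1e-12
    # IPS part: clip the importance weight w_i = pi / pi_0 at M,
    # i.e. keep a min(1, M / w_i) fraction of the unclipped IPS term.
    w = pi_logged / np.maximum(pi0_logged, eps)
    ips_part = np.minimum(w, M) * rewards
    # DM part: the reward model covers the blending mass
    # (1 - min(1, M / w)) that clipping removed, averaged over
    # actions drawn from the target policy.
    w_all = pi_all / np.maximum(pi0_all, eps)
    gave_up = 1.0 - np.minimum(1.0, M / np.maximum(w_all, eps))
    dm_part = np.sum(pi_all * gave_up * dm_preds, axis=1)
    return np.mean(ips_part + dm_part)
```

Note that both min(w, M) and min(1, M / w) are continuous and piecewise-linear in the policy probabilities, which is why such a blend is sub-differentiable and usable as a learning objective, in contrast to the hard indicator switch in the SWITCH estimator.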

Author Information

Yi Su (Cornell University)
Lequn Wang (Cornell University)
Michele Santacatterina (TRIPODS Center of Data Science, Cornell University)
Thorsten Joachims (Cornell University)
