
Infinite Action Contextual Bandits with Reusable Data Exhaust
Mark Rucker · Yinglun Zhu · Paul Mineiro

Tue Jul 25 02:00 PM -- 04:30 PM (PDT) @ Exhibit Hall 1 #438

For infinite action contextual bandits, smoothed regret and reduction to regression yield state-of-the-art online performance with computational cost independent of the action set. Unfortunately, the resulting data exhaust does not have well-defined importance weights, which frustrates the execution of downstream data science processes such as offline model selection. In this paper we describe an online algorithm with an equivalent smoothed regret guarantee that generates well-defined importance weights; in exchange, the online computational cost increases, but only to order smoothness (i.e., still independent of the action set). This removes a key obstacle to the adoption of smoothed regret in production scenarios.
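To see why well-defined importance weights matter for offline model selection, consider the standard inverse-propensity-scoring (IPS) estimator: given logged tuples of (context, action, reward, logging-policy probability), it reweights logged rewards to estimate the value of a different target policy. The sketch below is a minimal illustration of this general technique, not the paper's algorithm; the toy uniform logging policy and interval-based target policy are assumptions for demonstration only.

```python
import random

def ips_estimate(logged, target_density):
    """IPS estimate of a target policy's value from logged bandit data.

    Each record carries the density the logging policy assigned to the
    chosen action -- the importance weight's denominator.  Without a
    well-defined logging density, this estimator cannot be computed,
    which is the obstacle the abstract describes.
    """
    total = 0.0
    for context, action, reward, logged_density in logged:
        weight = target_density(context, action) / logged_density
        total += weight * reward
    return total / len(logged)

# Toy setup (hypothetical): contexts in [0.1, 0.9], actions in [0, 1].
# The logging policy is uniform on [0, 1] (density 1).  The target
# policy is uniform on [x - 0.1, x + 0.1] (density 5 on that interval).
random.seed(0)
logged = []
for _ in range(10000):
    x = 0.1 + 0.8 * random.random()
    a = random.random()                    # uniform logging action
    r = 1.0 if abs(a - x) < 0.1 else 0.0   # reward: action near context
    logged.append((x, a, r, 1.0))          # density of U[0, 1] is 1

def target_density(x, a):
    # Density of U[x - 0.1, x + 0.1]; zero outside its support.
    return 5.0 if abs(a - x) < 0.1 else 0.0

# The target policy always plays within 0.1 of the context, so its
# true value is 1.0; the IPS estimate should be close to that.
print(ips_estimate(logged, target_density))
```

The estimator is unbiased whenever the logged density is correct and positive on the target policy's support; the paper's contribution is an online algorithm whose exhaust supplies exactly such densities at smoothed-regret-compatible cost.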

Author Information

Mark Rucker (University of Virginia)

A research professional working towards a PhD at the University of Virginia, focusing on reinforcement learning, statistical estimation, and human behavior modeling. Project-based experience in contextual bandits, inverse reinforcement learning, MDP design, experiment design, kernel-based function approximation, Python, Scala, Spark, MATLAB, CVX, R, MongoDB, DynamoDB, JavaScript, S3, CloudFront, AWS Lambda, EC2, SQL Server, and C#.

Yinglun Zhu (University of California, Riverside)
Paul Mineiro (Microsoft)
