Poster
Dueling Bandits with Weak Regret
Bangrui Chen · Peter I Frazier
We consider online content recommendation with implicit feedback through pairwise comparisons, formalized as the dueling bandit problem. We study this problem in the Condorcet-winner setting and consider two notions of regret: the better-studied strong regret, which is zero only when both arms pulled are the Condorcet winner, and the less-studied weak regret, which is zero if either arm pulled is the Condorcet winner. We propose a new algorithm for this problem, Winner Stays (WS), with a variant for each kind of regret:
WS for weak regret (WS-W) has expected cumulative weak regret that is $O(N^2)$, and $O(N\log(N))$ if arms have a total order; WS for strong regret (WS-S) has expected cumulative strong regret of $O(N^2 + N \log(T))$, and $O(N\log(N)+N\log(T))$ if arms have a total order.
WS-W is the first dueling bandit algorithm with weak regret that is constant in time.
WS is simple to compute, even for problems with many arms, and we demonstrate through numerical experiments on simulated and real data that WS has significantly smaller regret than existing algorithms in both the weak- and strong-regret settings.
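The "winner stays" intuition can be illustrated with a minimal simulation. This is a simplified sketch, not the paper's exact WS-W (which organizes play more carefully using win counters); here the winner of each duel simply remains the incumbent and faces a uniformly random challenger, so play tends to concentrate on the Condorcet winner. The preference matrix `P` below is an illustrative example, not data from the paper.

```python
import random

def duel(P, i, j, rng):
    """Arm i beats arm j with probability P[i][j]."""
    return i if rng.random() < P[i][j] else j

def winner_stays(P, T, seed=0):
    """Simplified winner-stays heuristic (illustrative only):
    the winner of each duel stays on as the incumbent and faces
    a uniformly random challenger. Returns the winner sequence."""
    rng = random.Random(seed)
    n = len(P)
    incumbent = rng.randrange(n)
    winners = []
    for _ in range(T):
        challenger = rng.randrange(n)
        incumbent = duel(P, incumbent, challenger, rng)
        winners.append(incumbent)
    return winners

# Three arms with a total order 0 > 1 > 2; arm 0 is the Condorcet winner.
P = [[0.5, 0.7, 0.8],
     [0.3, 0.5, 0.7],
     [0.2, 0.3, 0.5]]

winners = winner_stays(P, 2000)
# Late in the run the incumbent is usually the Condorcet winner,
# so most comparisons incur zero weak regret.
share = winners[-500:].count(0) / 500
```

Because the Condorcet winner beats every challenger with probability above 1/2, it is the hardest incumbent to displace, which is the mechanism behind weak regret accumulating only until the winner takes over.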
Author Information
Bangrui Chen (Cornell University)
Peter I Frazier (Cornell University)
Peter Frazier is an Associate Professor in the School of Operations Research and Information Engineering at Cornell University. He is also a Staff Data Scientist at Uber, where he managed the data science group for UberPOOL while on sabbatical leave from Cornell. He completed his Ph.D. in Operations Research and Financial Engineering at Princeton University in 2009. Peter's research is in Bayesian optimization, multi-armed bandits and incentive design for social learning, with applications in e-commerce, the sharing economy, and materials design. He is the recipient of an AFOSR Young Investigator Award and an NSF CAREER Award.
Related Events (a corresponding poster, oral, or spotlight)
-
2017 Talk: Dueling Bandits with Weak Regret »
Mon Aug 7th 05:30 -- 05:48 AM Room C4.1
More from the Same Authors
-
2019 Poster: Bayesian Optimization of Composite Functions »
Raul Astudillo · Peter I Frazier
-
2019 Oral: Bayesian Optimization of Composite Functions »
Raul Astudillo · Peter I Frazier