Poster
Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits
Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li

Thu Jul 12th 06:15 -- 09:00 PM @ Hall B #124

Regret bounds in online learning compare the player's performance to $L$, the optimal performance in hindsight with a fixed strategy. Typically such bounds scale with the square root of the time horizon $T$. The more refined concept of a first-order regret bound replaces this with a $\sqrt{L}$ scaling, which may be much smaller than $\sqrt{T}$. It is well known that minor variants of standard algorithms satisfy first-order regret bounds in the full information and multi-armed bandit settings. In a COLT 2017 open problem, Agarwal, Krishnamurthy, Langford, Luo, and Schapire raised the issue that existing techniques do not seem sufficient to obtain first-order regret bounds for the contextual bandit problem. In the present paper, we resolve this open problem by presenting a new strategy based on augmenting the policy space.
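
To make the abstract's comparison concrete, here is a minimal sketch of the standard definitions in notation of our own choosing (the losses $\ell_t$, contexts $x_t$, actions $a_t$, and policy class $\Pi$ are our labels, not the paper's):

$$L \;=\; \min_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t\big(\pi(x_t)\big), \qquad R_T \;=\; \sum_{t=1}^{T} \ell_t(a_t) \;-\; L,$$

where each loss lies in $[0,1]$. A worst-case guarantee reads $R_T = O(\sqrt{T})$, while a first-order guarantee reads $R_T = \tilde{O}(\sqrt{L})$ up to logarithmic factors. Since $L \le T$ always, and $L \ll T$ whenever some fixed policy incurs small cumulative loss, the first-order bound can be much stronger.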

Author Information

Zeyuan Allen-Zhu (Microsoft Research AI)
Sebastien Bubeck (Microsoft Research)
Yuanzhi Li (Princeton University)
