

Poster

Online Learning with Feedback Graphs: The True Shape of Regret

Tomáš Kocák · Alexandra Carpentier

Exhibit Hall 1 #625
[ PDF ] [ Poster ]

Abstract: Sequential learning with feedback graphs is a natural extension of the multi-armed bandit problem where the problem is equipped with an underlying graph structure that provides additional information: playing an action reveals the losses of all the neighbors of the action. This problem was introduced by Mannor & Shamir (2011) and received considerable attention in recent years. It is generally stated in the literature that the minimax regret rate for this problem is of order $\sqrt{\alpha T}$, where $\alpha$ is the independence number of the graph, and $T$ is the time horizon. However, this is proven only when the number of rounds $T$ is larger than $\alpha^3$, which poses a significant restriction for the usability of this result in large graphs. In this paper, we define a new quantity $R$, called the *problem complexity*, and prove that the minimax regret is proportional to $R$ for any graph and time horizon $T$. Introducing an intricate exploration strategy, we define the Exp3-EX algorithm that achieves the minimax optimal regret bound and becomes the first provably optimal algorithm for this setting, even if $T$ is smaller than $\alpha^3$.
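
The sketch below is not from the paper; it only illustrates the quantities named in the abstract. It builds a small toy feedback graph, computes its independence number $\alpha$ by brute force, and compares the classical $\sqrt{\alpha T}$ regret scale with the condition $T > \alpha^3$ under which that rate is known to be tight. The graph, horizons, and names are hypothetical examples and have nothing to do with the Exp3-EX algorithm itself.

```python
from itertools import combinations
from math import sqrt

# Toy undirected feedback graph on 5 actions (a path 0-1-2-3-4):
# playing an action also reveals the losses of its neighbors.
neighbors = {
    0: {1},
    1: {0, 2},
    2: {1, 3},
    3: {2, 4},
    4: {3},
}

def independence_number(neighbors):
    """Largest set of mutually non-adjacent actions (brute force, small graphs only)."""
    nodes = list(neighbors)
    for k in range(len(nodes), 0, -1):
        for subset in combinations(nodes, k):
            if all(v not in neighbors[u] for u, v in combinations(subset, 2)):
                return k
    return 0

alpha = independence_number(neighbors)  # alpha = 3 for this path graph
for T in (10, 10**3, 10**6):
    rate = sqrt(alpha * T)              # classical sqrt(alpha * T) regret scale
    print(f"T={T:>7}  sqrt(alpha*T)={rate:8.1f}  T > alpha^3: {T > alpha**3}")
```

For small horizons the condition $T > \alpha^3$ fails, which is exactly the regime the abstract highlights: there the $\sqrt{\alpha T}$ rate is not known to be tight, motivating the problem complexity $R$.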
