Poster

An Information-Theoretic Analysis of Nonstationary Bandit Learning

Seungki Min · Daniel Russo

Exhibit Hall 1 #716

Abstract:

In nonstationary bandit learning problems, the decision-maker must continually gather information and adapt their action selection as the latent state of the environment evolves. In each time period, some latent optimal action maximizes expected reward under the environment state. We view the optimal action sequence as a stochastic process, and take an information-theoretic approach to analyze attainable performance. We bound per-period regret in terms of the entropy rate of the optimal action process. The bound applies to a wide array of problems studied in the literature and reflects the problem's information structure through its information-ratio.
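As a rough illustration only (not the paper's exact statement), a bound of this type can be written schematically as below, where Gamma stands for a uniform bound on the algorithm's information ratio, A_t^* is the optimal action in period t, and the entropy rate of the optimal action process measures how quickly the optimum drifts; these symbols are introduced here purely for exposition.

\[
\limsup_{T \to \infty} \; \frac{1}{T}\, \mathbb{E}\!\left[ \sum_{t=1}^{T} \big( R_{t, A_t^*} - R_{t, A_t} \big) \right]
\;\lesssim\; \sqrt{\Gamma \cdot \bar{H}(A^*)},
\qquad
\bar{H}(A^*) \;=\; \lim_{T \to \infty} \frac{1}{T}\, H\big(A_1^*, \dots, A_T^*\big).
\]

Read this way, a slowly changing environment has a small entropy rate and hence small attainable per-period regret, while a rapidly changing environment forces regret to stay bounded away from zero.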
