
Tightening Exploration in Upper Confidence Reinforcement Learning
Hippolyte Bourel · Odalric-Ambrym Maillard · Mohammad Sadegh Talebi

Tue Jul 14 09:00 AM -- 09:45 AM & Tue Jul 14 10:00 PM -- 10:45 PM (PDT) @ Virtual

The upper confidence reinforcement learning (\UCRL) strategy introduced in \citep{jaksch2010near} is a popular method for regret minimization in unknown discrete Markov Decision Processes under the average-reward criterion. Despite its generic theoretical regret guarantees, this strategy and its variants have so far remained mostly theoretical, as numerical experiments even on simple environments exhibit long burn-in phases before learning takes place. In pursuit of practical efficiency, we present \UCRLnew, which follows the lines of \UCRL\ but with two key modifications. First, it uses state-of-the-art time-uniform concentration inequalities to compute confidence sets on the reward and (component-wise) transition distributions for each state-action pair. Second, to tighten exploration, it adaptively computes the support of each transition distribution, which in turn lets us revisit the extended value iteration procedure to optimize over distributions with reduced support by disregarding low-probability transitions, while still ensuring near-optimism. We demonstrate, through numerical experiments on standard environments, that reducing exploration this way yields a substantial improvement over \UCRL\ and its variants. On the theoretical side, these modifications enable us to derive a regret bound for \UCRLnew\ that improves on \UCRL\ and, thanks to variance-aware concentration bounds, for the first time involves notions of local diameter and effective support.
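To illustrate the support-reduction idea, here is a minimal, hypothetical sketch of estimating the support of a transition distribution from visit counts: next states whose empirical probability falls below a deviation threshold are discarded. The function name, the Hoeffding-style threshold, and the `delta` parameter are illustrative assumptions; the paper uses tighter time-uniform, variance-aware concentration bounds.

```python
import numpy as np

def estimated_support(counts, delta=0.05):
    """Hypothetical sketch: estimate the support of a transition
    distribution for a fixed state-action pair from visit counts,
    discarding next states whose empirical probability is
    statistically indistinguishable from zero.

    counts: array of visit counts N(s, a, s') over next states s'.
    Returns the indices of next states kept in the reduced support.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    if n == 0:
        # No observations yet: keep the full support (maximal optimism).
        return np.arange(len(counts))
    p_hat = counts / n
    # Simple Hoeffding-style deviation term (illustrative only; the
    # paper's bounds are time-uniform and variance-aware, hence tighter).
    eps = np.sqrt(np.log(2 * len(counts) / delta) / (2 * n))
    return np.flatnonzero(p_hat > eps)
```

For example, with counts `[50, 1, 0, 49]` over four next states, only the two frequently visited states survive the threshold, so extended value iteration would optimize over distributions supported on those two states only.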

Author Information

Hippolyte Bourel (ENS Rennes)
Odalric-Ambrym Maillard (Inria Lille - Nord Europe)
Mohammad Sadegh Talebi (University of Copenhagen)
