Poster

Tightening Exploration in Upper Confidence Reinforcement Learning

Hippolyte Bourel · Odalric-Ambrym Maillard · Mohammad Sadegh Talebi

Virtual

Keywords: [ Statistical Learning Theory ] [ Reinforcement Learning ] [ Reinforcement Learning Theory ] [ Reinforcement Learning - General ]


Abstract:

The upper confidence reinforcement learning (UCRL2) strategy introduced in (Jaksch et al., 2010) is a popular method for regret minimization in unknown discrete Markov Decision Processes under the average-reward criterion. Despite its appealing and generic theoretical regret guarantees, this strategy and its variants have so far remained mostly theoretical, as numerical experiments on even simple environments exhibit long burn-in phases before learning takes place. In pursuit of practical efficiency, we present UCRL3, which follows the lines of UCRL2 but with two key modifications. First, it uses state-of-the-art time-uniform concentration inequalities to compute confidence sets on the reward and (component-wise) transition distributions for each state-action pair. Second, to tighten exploration, it uses an adaptive computation of the support of each transition distribution, which in turn enables us to revisit the extended value iteration procedure and optimize over distributions with reduced support, disregarding low-probability transitions while still ensuring near-optimism. We demonstrate, through numerical experiments on standard environments, that reducing exploration this way yields a substantial numerical improvement compared to UCRL2 and its variants. On the theoretical side, these modifications enable us to derive a regret bound for UCRL3 that improves on that of UCRL2 and, for the first time, involves notions of local diameter and effective support, thanks to variance-aware concentration bounds.
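To make the two modifications concrete, below is a minimal Python sketch of the kind of machinery the abstract describes: component-wise, time-uniform confidence intervals on the transition probabilities of a single state-action pair, a reduced ("plausible") support, and the inner optimistic step of extended value iteration restricted to that support. The constants, the threshold rule, and the function names (componentwise_bounds, plausible_support, optimistic_transition) are illustrative assumptions, not the paper's exact bounds or implementation.

```python
import numpy as np

def componentwise_bounds(counts_sas, delta=0.05):
    """Per-component confidence interval on each transition probability
    p(s'|s,a), from the visit counts of one state-action pair.
    The radius is an empirical-Bernstein-style, time-uniform bound with
    illustrative constants; the paper's exact inequalities are tighter."""
    n = max(int(counts_sas.sum()), 1)
    p_hat = counts_sas / n
    # Time-uniform log factor (assumed form, grows like log(1/delta) + loglog(n)).
    log_term = np.log(3.0 / delta) + np.log(np.log(max(n, 2)) + 1.0)
    var = p_hat * (1.0 - p_hat)  # Bernoulli variance of each component
    radius = np.sqrt(2.0 * var * log_term / n) + 3.0 * log_term / n
    return np.clip(p_hat - radius, 0.0, 1.0), np.clip(p_hat + radius, 0.0, 1.0)

def plausible_support(upper, eps=0.05):
    """Reduced support: next states whose probability upper bound is not
    negligible. The threshold rule is illustrative; the paper's adaptive
    computation is designed so that near-optimism is preserved."""
    return np.flatnonzero(upper > eps)

def optimistic_transition(lower, upper, u, support):
    """Inner step of extended value iteration restricted to the reduced
    support: pick the transition vector inside the confidence box that
    maximizes the expected next-state value u, by starting from the lower
    bounds and pouring the remaining mass onto high-value states first."""
    q = np.zeros_like(lower)
    q[support] = lower[support]
    budget = 1.0 - q.sum()
    for s_next in support[np.argsort(-u[support])]:  # best plausible states first
        if budget <= 0.0:
            break
        add = min(upper[s_next] - q[s_next], budget)
        q[s_next] += add
        budget -= add
    if budget > 0.0:  # pragmatic fallback if the box leaves mass unassigned
        q[support[np.argmax(u[support])]] += budget
    return q

# Toy usage: 4 candidate next states, 500 observed transitions from one (s, a).
counts = np.array([300.0, 150.0, 50.0, 0.0])
lo, up = componentwise_bounds(counts)
supp = plausible_support(up)             # the never-observed state is dropped
u = np.array([1.0, 0.2, 0.8, 2.0])       # current value estimates
q_opt = optimistic_transition(lo, up, u, supp)
print(supp, q_opt, q_opt.sum())
```

In this toy run the never-observed fourth state falls out of the plausible support, so the optimistic planner no longer inflates its value; this is the sense in which restricting the support tightens exploration, with the caveat that the real algorithm controls the disregarded mass so near-optimism still holds.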
