Spotlight
Fast active learning for pure exploration in reinforcement learning
Pierre Menard · Omar Darwiche Domingues · Anders Jonsson · Emilie Kaufmann · Edouard Leurent · Michal Valko
Abstract:
Realistic environments often provide agents with very limited feedback.
When the environment is initially unknown, the feedback may be completely absent in the beginning,
and the agents may first choose to devote all their effort to \emph{exploring efficiently}.
Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side,
and a few theoretically backed exploration strategies on the other.
Many of these are incarnations of \emph{intrinsic motivation} and, in particular, of \emph{exploration bonuses}.
A common choice is a bonus that scales as $1/\sqrt{n}$,
where $n$ is the number of times the state-action pair in question has been visited.
We show that, surprisingly, for the pure-exploration objective of \emph{reward-free exploration}, bonuses that scale with $1/n$ bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon $H$.
Furthermore, we show that an improved analysis of the stopping time allows us to improve the sample complexity by a factor of $H$
in the \emph{best-policy identification} setting, another pure-exploration objective in which the environment provides rewards
but the agent is not penalized for its behavior during the exploration phase.
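
To make the contrast between the two bonus scalings concrete, below is a minimal sketch of a bonus-driven, reward-free exploration loop on a toy tabular MDP. It is an illustration under assumed constants, not the paper's algorithm: the MDP sizes, the logarithmic term, and the `bonus` helper are all hypothetical, and only the count scaling differs between the "classic" $1/\sqrt{n}$ and "fast" $1/n$ variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP (hypothetical sizes, not from the paper).
S, A, H = 5, 2, 10                          # states, actions, horizon
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a distribution over next states

def bonus(n, kind="fast", delta=0.1):
    """Count-based exploration bonus after n visits to a state-action pair.

    kind="classic" uses the usual 1/sqrt(n) scaling; kind="fast" uses the
    1/n scaling highlighted in the abstract. The constants are illustrative.
    """
    n = max(n, 1)
    log_term = np.log(S * A * H / delta)
    return H * (np.sqrt(log_term / n) if kind == "classic" else log_term / n)

counts = np.zeros((S, A), dtype=int)

for episode in range(200):
    # Backward induction: in the reward-free setting, the backup is driven
    # purely by exploration bonuses -- no extrinsic reward appears.
    W = np.zeros(S)                  # value at step H (end of episode)
    Q = np.zeros((H, S, A))
    for h in reversed(range(H)):
        for s in range(S):
            for a in range(A):
                Q[h, s, a] = bonus(counts[s, a]) + P[s, a] @ W
        W = np.minimum(Q[h].max(axis=1), H)  # keep values bounded by the horizon
    # Roll out the greedy policy and update the visit counts.
    s = 0
    for h in range(H):
        a = int(Q[h, s].argmax())
        counts[s, a] += 1
        s = rng.choice(S, p=P[s, a])
```

Passing `kind="classic"` instead of the default changes only how quickly the bonuses, and hence the optimistic values, shrink with the visit counts; the $1/n$ variant decays faster, which is the source of the improved dependence on $H$ claimed above.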