
Randomized Exploration in Reinforcement Learning with General Value Function Approximation
Haque Ishfaq · Qiwen Cui · Viet Nguyen · Alex Ayoub · Zhuoran Yang · Zhaoran Wang · Doina Precup · Lin Yang

Wed Jul 21 06:30 PM -- 06:35 PM (PDT)
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class $\mathcal{F}$, our algorithm achieves a worst-case regret bound of $\tilde{\mathcal{O}}(\mathrm{poly}(d_E H)\sqrt{T})$, where $T$ is the number of elapsed time steps, $H$ is the planning horizon, and $d_E$ is the \emph{eluder dimension} of $\mathcal{F}$. In the linear setting, our algorithm reduces to LSVI-PHE, a variant of RLSVI, that enjoys an $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret. We complement the theory with an empirical evaluation across known difficult exploration tasks.
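The core idea, perturbing rewards with i.i.d. noise and taking a maximum over the resulting fits to obtain optimism, can be sketched for the linear setting roughly as follows. This is a minimal illustrative sketch, not the paper's algorithm: the function name, the single-step ridge-regression setup, and all defaults (`sigma`, `lam`, `num_samples`) are assumptions made for illustration.

```python
import numpy as np

def perturbed_lsvi_step(Phi, rewards, sigma=1.0, lam=1.0, num_samples=10, rng=None):
    """Sketch of one optimistic value-estimation step in the spirit of LSVI-PHE.

    Perturbs the observed rewards with i.i.d. Gaussian scalar noise, solves a
    ridge regression for each perturbed dataset, and takes the pointwise
    maximum over the resulting estimates (optimistic reward sampling).
    """
    rng = np.random.default_rng(rng)
    n, d = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(d)           # regularized Gram matrix
    estimates = []
    for _ in range(num_samples):
        noise = sigma * rng.standard_normal(n)  # i.i.d. scalar perturbations
        w = np.linalg.solve(A, Phi.T @ (rewards + noise))
        estimates.append(Phi @ w)               # value estimate at the data points
    return np.max(estimates, axis=0)            # optimistic (max) estimate
```

With `sigma=0` the procedure collapses to a plain ridge fit; with noise, the maximum over perturbed fits tends to sit above the unperturbed estimate, which is what substitutes for a UCB-style bonus.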

Author Information

Haque Ishfaq (MILA / McGill University)

I am a first-year Ph.D. student in the Montreal Institute for Learning Algorithms (MILA) at McGill University, where I am advised by Professor Doina Precup. Before coming to McGill, I obtained my M.S. degree in Statistics and B.S. degree in Mathematical and Computational Science from Stanford University.

Qiwen Cui (Peking University)
Viet Nguyen (McGill, Mila)
Alex Ayoub (University of Alberta)
Zhuoran Yang (Princeton University)
Zhaoran Wang (Northwestern University)
Doina Precup (McGill University / DeepMind)
Lin Yang (UCLA)
