Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping
Dongruo Zhou · Jiafan He · Quanquan Gu

Wed Jul 21 05:20 PM -- 05:25 PM (PDT)
Modern reinforcement learning tasks have large state and action spaces. To handle them efficiently, one often uses a predefined feature mapping to represent states and actions in a low-dimensional space. In this paper, we study reinforcement learning for discounted Markov decision processes (MDPs), where the transition kernel can be parameterized as a linear function of a certain feature mapping. We propose a novel algorithm that makes use of the feature mapping and attains an $\tilde O(d\sqrt{T}/(1-\gamma)^2)$ regret, where $d$ is the dimension of the feature space, $T$ is the time horizon, and $\gamma$ is the discount factor of the MDP. To the best of our knowledge, this is the first polynomial regret bound that does not require access to a generative model or strong assumptions such as ergodicity of the MDP. By constructing a special class of MDPs, we also show that for any algorithm the regret is lower bounded by $\Omega(d\sqrt{T}/(1-\gamma)^{1.5})$. Our upper and lower bound results together suggest that the proposed algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
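For concreteness, here is a minimal sketch of how this setting is typically formalized in the linear mixture MDP literature; the paper's exact assumptions may differ, and the symbols $\phi$ and $\theta^*$ below are illustrative rather than taken from the abstract. One assumes a known feature mapping $\phi: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}^d$ and an unknown parameter vector $\theta^* \in \mathbb{R}^d$ such that

$$\mathbb{P}(s' \mid s, a) = \langle \phi(s' \mid s, a), \theta^* \rangle \quad \text{for all } (s, a, s'),$$

and the regret after $T$ steps against an optimal policy $\pi^*$ is measured as

$$\mathrm{Regret}(T) = \sum_{t=1}^{T} \left( V^{\pi^*}(s_t) - V^{\pi_t}(s_t) \right),$$

which the proposed algorithm bounds by $\tilde O(d\sqrt{T}/(1-\gamma)^2)$.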

Author Information

Dongruo Zhou (University of California, Los Angeles)
Jiafan He (University of California, Los Angeles)
Quanquan Gu (University of California, Los Angeles)
