

Policy Gradient Method For Robust Reinforcement Learning

Yue Wang · Shaofeng Zou

Hall E #1121

Keywords: [ RL: Policy Search ] [ RL: Discounted Cost/Reward ] [ RL: Function Approximation ] [ RL: Planning ] [ T: Reinforcement Learning and Planning ]

Abstract: This paper develops the first policy gradient method with a global optimality guarantee and complexity analysis for robust reinforcement learning under model mismatch. The goal of robust reinforcement learning is to learn a policy that is robust to model mismatch between the simulator and the real environment. We first develop the robust policy (sub-)gradient, which is applicable to any differentiable parametric policy class. We show that the proposed robust policy gradient method converges to the global optimum asymptotically under direct policy parameterization. We further develop a smoothed robust policy gradient method, and show that to achieve an $\epsilon$-global optimum, the complexity is $\mathcal O(\epsilon^{-3})$. We then extend our methodology to the general model-free setting, and design the robust actor-critic method with a differentiable parametric policy class and value function. We further characterize its asymptotic convergence and sample complexity under the tabular setting. Finally, we provide simulation results to demonstrate the robustness of our methods.
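The core idea described in the abstract, ascending a (sub-)gradient of the worst-case value over an uncertainty set of transition models under direct (simplex) policy parameterization, can be sketched as follows. This is only an illustrative toy, not the paper's algorithm: the two-model finite uncertainty set, the random tabular MDP, the finite-difference gradient, and the simplex projection are all assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's construction):
# projected (sub-)gradient ascent on the worst-case value over a
# finite uncertainty set of transition models, with a direct
# (per-state simplex) policy parameterization.

GAMMA = 0.9
N_S, N_A = 2, 2
rng = np.random.default_rng(0)

# Uncertainty set: two candidate transition kernels P[s, a, s'].
models = [rng.dirichlet(np.ones(N_S), size=(N_S, N_A)) for _ in range(2)]
reward = rng.uniform(size=(N_S, N_A))

def value(policy, P):
    """Exact policy evaluation: V = (I - gamma * P_pi)^{-1} r_pi."""
    P_pi = np.einsum('sa,sap->sp', policy, P)
    r_pi = np.einsum('sa,sa->s', policy, reward)
    return np.linalg.solve(np.eye(N_S) - GAMMA * P_pi, r_pi)

def robust_value(policy):
    """Worst-case mean value and the minimizing model."""
    vals = [value(policy, P).mean() for P in models]
    k = int(np.argmin(vals))
    return vals[k], models[k]

def policy_gradient(policy, P):
    """Finite-difference gradient of the mean value w.r.t. the policy table."""
    eps, base = 1e-6, value(policy, P).mean()
    grad = np.zeros_like(policy)
    for s in range(N_S):
        for a in range(N_A):
            pert = policy.copy()
            pert[s, a] += eps
            grad[s, a] = (value(pert, P).mean() - base) / eps
    return grad

def project_simplex(v):
    """Euclidean projection of each row onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1) - 1.0
    idx = np.arange(1, v.shape[1] + 1)
    rho = (u - css / idx > 0).sum(axis=1)
    theta = css[np.arange(v.shape[0]), rho - 1] / rho
    return np.maximum(v - theta[:, None], 0.0)

# Robust (sub-)gradient ascent: differentiate through the worst model.
policy = np.full((N_S, N_A), 1.0 / N_A)
for _ in range(200):
    _, worst_P = robust_value(policy)
    policy = project_simplex(policy + 0.1 * policy_gradient(policy, worst_P))

print(robust_value(policy)[0])
```

Using the minimizing model's gradient as a subgradient of the min is the standard Danskin-style argument for a finite uncertainty set; the paper treats general uncertainty sets and adds a smoothed variant with an $\mathcal O(\epsilon^{-3})$ complexity bound.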
