
Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality
Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin Liang

Wed Jul 21 06:30 PM -- 06:35 PM (PDT)
Designing off-policy reinforcement learning algorithms is typically a very challenging task, because a desirable iteration update often involves an expectation over an on-policy distribution. Prior off-policy actor-critic (AC) algorithms have introduced a new critic that uses the density ratio to adjust for the distribution mismatch in order to stabilize convergence, but at the cost of potentially introducing high bias due to the estimation errors of both the density ratio and the value function. In this paper, we develop a doubly robust off-policy AC algorithm (DR-Off-PAC) for the discounted MDP setting, which can take advantage of learned nuisance functions to reduce estimation errors. Moreover, DR-Off-PAC adopts a single-timescale structure, in which both the actor and the critics are updated simultaneously with constant stepsizes, and is thus more sample efficient than prior algorithms that adopt either a two-timescale or a nested-loop structure. We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy. We also show that the overall convergence of DR-Off-PAC is doubly robust to the approximation errors, which depend only on the expressive power of the approximation functions. To the best of our knowledge, our study establishes the first overall sample complexity analysis for a single-timescale off-policy AC algorithm.
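The double robustness invoked in the abstract can be illustrated with the classic doubly robust off-policy value estimator in a one-step (bandit) setting: the estimate stays unbiased if *either* the density ratio *or* the value model is correct. The sketch below is not the paper's DR-Off-PAC algorithm (which handles the full discounted MDP with learned nuisance functions); all names here (`mu`, `pi`, `q_true`, `dr_estimate`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 actions, behavior policy mu, target policy pi.
mu = np.array([0.7, 0.3])      # behavior action probabilities
pi = np.array([0.2, 0.8])      # target action probabilities
q_true = np.array([1.0, 2.0])  # true mean reward of each action

# Data collected under the behavior policy mu.
n = 200_000
actions = rng.choice(2, size=n, p=mu)
rewards = q_true[actions] + rng.normal(0.0, 1.0, size=n)

def dr_estimate(q_model, ratio):
    """Doubly robust estimate of the target policy's value:
    model-based term + importance-weighted correction."""
    direct = pi @ q_model                                    # plug-in value under pi
    correction = ratio[actions] * (rewards - q_model[actions])  # residual correction
    return direct + correction.mean()

true_value = pi @ q_true           # 0.2 * 1.0 + 0.8 * 2.0 = 1.8

good_ratio = pi / mu               # correct density ratio pi/mu
bad_ratio = np.ones(2)             # misspecified ratio
bad_q = q_true + 0.5               # biased value model

# Either nuisance correct -> estimate remains close to the truth (1.8).
print(dr_estimate(bad_q, good_ratio))   # ratio correct, value model wrong
print(dr_estimate(q_true, bad_ratio))   # value model correct, ratio wrong
```

With the correct ratio, the correction term cancels the value model's bias in expectation; with the correct value model, the residual has mean zero regardless of the weights, which is the sense in which the estimator is "doubly robust."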

Author Information

Tengyu Xu (The Ohio State University)
Zhuoran Yang (Princeton University)
Zhaoran Wang (Northwestern University)
Yingbin Liang (The Ohio State University)
