Poster
Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation
Asuman Ozdaglar · Sarath Pattathil · Jiawei Zhang · Kaiqing Zhang

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #221

Offline reinforcement learning (RL) aims to find an optimal policy for sequential decision-making using a pre-collected dataset, without further interaction with the environment. Recent theoretical progress has focused on developing sample-efficient offline RL algorithms under various relaxed assumptions on data coverage and function approximators, especially for handling excessively large state-action spaces. Among these, the framework based on the linear-programming (LP) reformulation of Markov decision processes has shown promise: it enables sample-efficient offline RL with function approximation, under only partial data coverage and realizability assumptions on the function classes, while remaining computationally tractable. In this work, we revisit the LP framework for offline RL and provide a new reformulation that advances the existing results in several aspects, relaxing certain assumptions and achieving optimal statistical rates in terms of sample size. Our key enabler is to introduce proper constraints in the reformulation, instead of the regularization used in the literature, together with careful choices of the function classes and initial state distributions. We hope our insights shed light on the use of LP formulations, and the induced primal-dual minimax optimization, in offline RL.
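For background, a sketch of the classical LP view of a discounted MDP that this line of work builds on (with discount factor $\gamma$, initial state distribution $\mu_0$, transition kernel $P$, and reward $r$); this is the standard formulation, not the paper's specific reformulation, which modifies the constraints, function classes, and initial distributions:

\begin{align*}
\text{(primal)}\quad & \min_{V}\ (1-\gamma)\,\mathbb{E}_{s\sim\mu_0}[V(s)] \quad \text{s.t.}\quad V(s) \ge r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}[V(s')]\ \ \forall (s,a),\\
\text{(dual)}\quad & \max_{d\ge 0}\ \sum_{s,a} d(s,a)\,r(s,a) \quad \text{s.t.}\quad \sum_{a} d(s,a) = (1-\gamma)\,\mu_0(s) + \gamma \sum_{s',a'} P(s\mid s',a')\,d(s',a')\ \ \forall s,
\end{align*}

whose Lagrangian induces the primal-dual minimax problem

\[
\min_{V}\ \max_{d\ge 0}\ (1-\gamma)\,\mathbb{E}_{s\sim\mu_0}[V(s)] + \sum_{s,a} d(s,a)\Big(r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}[V(s')] - V(s)\Big),
\]

where the dual variable $d$ is a state-action occupancy measure. Roughly speaking, offline methods in this framework estimate the expectations over $P$ from the pre-collected dataset and restrict $V$ and $d$ (or a density ratio) to function classes; the abstract's contribution concerns constraining, rather than regularizing, this saddle-point problem.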

Author Information

Asuman Ozdaglar (Massachusetts Institute of Technology)
Sarath Pattathil (Massachusetts Institute of Technology)
Jiawei Zhang (Massachusetts Institute of Technology)
Kaiqing Zhang (University of Maryland, College Park)
