Existing theoretical studies of offline reinforcement learning (RL) mostly consider a dataset sampled directly from the target task. In practice, however, data often come from several heterogeneous but related sources. Motivated by this gap, this work aims to rigorously understand offline RL with multiple datasets collected from randomly perturbed versions of the target task rather than from the target task itself. An information-theoretic lower bound is derived, revealing that, beyond the usual requirement on the number of data samples, a requirement on the number of involved data sources is also necessary. Then, a novel HetPEVI algorithm is proposed, which simultaneously accounts for the sample uncertainties arising from a finite number of data samples per source and the source uncertainties arising from a finite number of available data sources. Theoretical analyses demonstrate that HetPEVI can solve the target task as long as the data sources collectively provide good data coverage. Moreover, HetPEVI is shown to be optimal up to a polynomial factor of the horizon length. Finally, the study is extended to offline Markov games and offline robust RL, which illustrates the generality of the proposed designs and theoretical analyses.
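To make the described mechanism concrete, below is a minimal tabular sketch of the idea behind HetPEVI as summarized in the abstract: pessimistic value iteration over data pooled from M sources, where the value estimate is penalized both for sample uncertainty (finite data per source) and for source uncertainty (finite number of sources). The function name `het_pevi`, the Hoeffding-style bonus forms, and the constants `c_sample` and `c_source` are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def het_pevi(datasets, S, A, H, c_sample=1.0, c_source=1.0):
    """Illustrative sketch (not the paper's exact algorithm).

    datasets: list of M datasets, each a list of transitions
              (h, s, a, r, s_next) from one perturbed source.
    S, A, H:  numbers of states, actions, and horizon length.
    Returns a greedy policy pi[h, s] and pessimistic values V.
    """
    M = len(datasets)
    # Per-source empirical counts, transition models, and rewards.
    counts = np.zeros((M, H, S, A))
    P_sum = np.zeros((M, H, S, A, S))
    r_sum = np.zeros((M, H, S, A))
    for m, data in enumerate(datasets):
        for (h, s, a, r, s_next) in data:
            counts[m, h, s, a] += 1
            P_sum[m, h, s, a, s_next] += 1
            r_sum[m, h, s, a] += r
    n = np.maximum(counts, 1)                 # avoid division by zero
    P_hat = P_sum / n[..., None]              # per-source transitions
    r_hat = r_sum / n                         # per-source mean rewards

    # Average the per-source models; an (s, a) pair a source never
    # visits contributes a zero model and the maximal sample penalty.
    P_bar = P_hat.mean(axis=0)
    r_bar = r_hat.mean(axis=0)

    # Pessimistic backward induction with a combined penalty:
    # a Hoeffding-style sample bonus ~ 1/sqrt(n) averaged over sources,
    # plus a source bonus ~ 1/sqrt(M) (both assumed forms).
    V = np.zeros((H + 1, S))
    Q = np.zeros((H, S, A))
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        sample_bonus = c_sample * H * np.mean(1.0 / np.sqrt(n[:, h]), axis=0)
        source_bonus = c_source * H / np.sqrt(M)
        q = r_bar[h] + P_bar[h] @ V[h + 1] - sample_bonus - source_bonus
        Q[h] = np.clip(q, 0.0, H)             # truncated pessimistic values
        pi[h] = Q[h].argmax(axis=1)
        V[h] = Q[h].max(axis=1)
    return pi, V
```

Note that in this sketch the source bonus does not vanish as each source contributes more samples; it shrinks only as M grows, mirroring the abstract's point that the lower bound imposes a requirement on the number of sources in addition to the number of samples.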
Author Information
Chengshuai Shi (University of Virginia)
Wei Xiong (The Hong Kong University of Science and Technology)
Cong Shen (University of Virginia)
Jing Yang (Penn State University)
More from the Same Authors
- 2023 Poster: Near-optimal Conservative Exploration in Reinforcement Learning under Episode-wise Constraints
  Donghao Li · Ruiquan Huang · Cong Shen · Jing Yang
- 2023 Poster: Federated Linear Contextual Bandits with User-level Differential Privacy
  Ruiquan Huang · Huanyu Zhang · Luca Melis · Milan Shen · Meisam Hejazinia · Jing Yang
- 2023 Poster: Non-stationary Reinforcement Learning under General Function Approximation
  Songtao Feng · Ming Yin · Ruiquan Huang · Yu-Xiang Wang · Jing Yang · Yingbin Liang
- 2023 Poster: Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes
  Chenlu Ye · Wei Xiong · Quanquan Gu · Tong Zhang
- 2022 Poster: A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games
  Wei Xiong · Han Zhong · Chengshuai Shi · Cong Shen · Tong Zhang
- 2022 Poster: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets
  Han Zhong · Wei Xiong · Jiyuan Tan · Liwei Wang · Tong Zhang · Zhaoran Wang · Zhuoran Yang
- 2022 Spotlight: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets
  Han Zhong · Wei Xiong · Jiyuan Tan · Liwei Wang · Tong Zhang · Zhaoran Wang · Zhuoran Yang
- 2022 Spotlight: A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games
  Wei Xiong · Han Zhong · Chengshuai Shi · Cong Shen · Tong Zhang