Temporal difference (TD) learning is one of the main foundations of modern reinforcement learning. This paper studies the use of TD(0), a canonical TD algorithm, to estimate the value function of a given policy from a batch of data. In this batch setting, we show that TD(0) may converge to an inaccurate value function because the update following an action is weighted according to the number of times that action occurred in the batch -- not the true probability of the action under the given policy. To address this limitation, we introduce policy sampling error corrected TD(0) (PSEC-TD(0)). PSEC-TD(0) first estimates the empirical distribution of actions in each state in the batch and then uses importance sampling to correct for the mismatch between the empirical weighting and the correct weighting for updates following each action. We refine the concept of a certainty-equivalence estimate and argue that PSEC-TD(0) is a more data-efficient estimator than TD(0) for a fixed batch of data. Finally, we conduct an empirical evaluation of PSEC-TD(0) on three batch value function learning tasks, with a hyperparameter sensitivity analysis, and show that PSEC-TD(0) produces value function estimates with lower mean squared error than TD(0).
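For concreteness, here is a minimal tabular sketch of the correction the abstract describes, not the paper's exact algorithm: batch TD(0) implicitly weights the update after each action by that action's empirical frequency in the batch, so PSEC-TD(0) reweights each update by the ratio of the evaluation policy's action probability to a count-based (maximum-likelihood) estimate of the batch's action distribution. The function name, the transition format, and the hyperparameter values below are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def psec_td0(batch, pi, num_states, alpha=0.1, gamma=0.99, sweeps=100):
    """Sketch of PSEC-TD(0) for a tabular MDP.

    batch: list of transitions (s, a, r, s_next, done) with integer states
    pi:    pi[s][a] = probability of action a in state s under the
           evaluation policy
    """
    # Step 1: estimate the empirical distribution of actions in each
    # state from the batch (count-based maximum-likelihood estimate).
    counts = defaultdict(lambda: defaultdict(int))
    for s, a, _, _, _ in batch:
        counts[s][a] += 1
    pi_hat = {
        s: {a: c / sum(a_counts.values()) for a, c in a_counts.items()}
        for s, a_counts in counts.items()
    }

    # Step 2: run TD(0) over the batch, scaling each update by the
    # importance weight pi(a|s) / pi_hat(a|s) to correct the mismatch
    # between empirical and true action weighting.
    V = np.zeros(num_states)
    for _ in range(sweeps):
        for s, a, r, s_next, done in batch:
            rho = pi[s][a] / pi_hat[s][a]  # PSEC correction weight
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * rho * (target - V[s])
    return V
```

With rho fixed to 1 this reduces to ordinary batch TD(0), which makes the source of the sampling-error correction easy to see; note that pi_hat[s][a] is always positive for state-action pairs that appear in the batch.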
Author Information
Brahma Pavse (University of Texas at Austin)
Ishan Durugkar (University of Texas at Austin)
Josiah Hanna (University of Edinburgh)
Peter Stone (University of Texas at Austin)
More from the Same Authors
- 2022 : Model-Based Meta Automatic Curriculum Learning
  Zifan Xu · Yulin Zhang · Shahaf Shperberg · Reuth Mirsky · Yuqian Jiang · Bo Liu · Peter Stone
- 2022 : Task Factorization in Curriculum Learning
  Reuth Mirsky · Shahaf Shperberg · Yulin Zhang · Zifan Xu · Yuqian Jiang · Jiaxun Cui · Peter Stone
- 2022 : Q/A: Invited Speaker: Peter Stone
  Peter Stone
- 2022 : Invited Speaker: Peter Stone
  Peter Stone
- 2022 Poster: Causal Dynamics Learning for Task-Independent State Abstraction
  Zizhao Wang · Xuesu Xiao · Zifan Xu · Yuke Zhu · Peter Stone
- 2022 Oral: Causal Dynamics Learning for Task-Independent State Abstraction
  Zizhao Wang · Xuesu Xiao · Zifan Xu · Yuke Zhu · Peter Stone
- 2021 Poster: Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition
  Bo Liu · Qiang Liu · Peter Stone · Animesh Garg · Yuke Zhu · Anima Anandkumar
- 2021 Oral: Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition
  Bo Liu · Qiang Liu · Peter Stone · Animesh Garg · Yuke Zhu · Anima Anandkumar
- 2019 : Peter Stone: Learning Curricula for Transfer Learning in RL
  Peter Stone
- 2019 : Panel discussion with Craig Boutilier (Google Research), Emma Brunskill (Stanford), Chelsea Finn (Google Brain, Stanford, UC Berkeley), Mohammad Ghavamzadeh (Facebook AI), John Langford (Microsoft Research), and David Silver (DeepMind)
  Peter Stone · Craig Boutilier · Emma Brunskill · Chelsea Finn · John Langford · David Silver · Mohammad Ghavamzadeh
- 2019 : Invited Talk 1: Adaptive Tolling for Multiagent Traffic Optimization
  Peter Stone
- 2019 Poster: Importance Sampling Policy Evaluation with an Estimated Behavior Policy
  Josiah Hanna · Scott Niekum · Peter Stone
- 2019 Oral: Importance Sampling Policy Evaluation with an Estimated Behavior Policy
  Josiah Hanna · Scott Niekum · Peter Stone
- 2017 Poster: Data-Efficient Policy Evaluation Through Behavior Policy Search
  Josiah Hanna · Philip S. Thomas · Peter Stone · Scott Niekum
- 2017 Talk: Data-Efficient Policy Evaluation Through Behavior Policy Search
  Josiah Hanna · Philip S. Thomas · Peter Stone · Scott Niekum