Specifying a Reinforcement Learning (RL) task involves choosing a suitable planning horizon, which is typically modeled by a discount factor. It is known that applying RL algorithms with a lower discount factor can act as a regularizer, improving performance in the limited-data regime. Yet the exact nature of this regularizer has not been investigated. In this work, we fill in this gap. For several Temporal-Difference (TD) learning methods, we show an explicit equivalence between using a reduced discount factor and adding an explicit regularization term to the algorithm's loss. Motivated by this equivalence, we empirically compare this technique with standard L2 regularization through extensive experiments in discrete and continuous domains, using both tabular and functional representations. Our experiments suggest that the effectiveness of the regularization is strongly related to properties of the available data, such as its size, distribution, and mixing rate.
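As a rough illustration of the two techniques the abstract contrasts, the sketch below runs tabular TD(0) policy evaluation on a small random-walk chain, once with a reduced discount factor and once with the full discount factor plus an explicit L2 penalty on the value estimates. This is a minimal sketch under assumed settings, not the paper's exact equivalence construction; the environment, the helper names (`td0`, `rollout_chain`), the step size, and the penalty weight are all illustrative choices.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact construction):
# tabular TD(0) policy evaluation on a random-walk chain, comparing
#   (a) a reduced discount factor, and
#   (b) the full discount factor with an explicit L2 penalty on the values.

def td0(transitions, gamma, alpha=0.1, l2=0.0, n_states=5):
    """Semi-gradient TD(0) over a list of (s, r, s_next, done) tuples."""
    v = np.zeros(n_states)
    for s, r, s_next, done in transitions:
        target = r if done else r + gamma * v[s_next]
        # Semi-gradient step on 0.5*(target - v[s])^2 + 0.5*l2*v[s]^2.
        v[s] += alpha * ((target - v[s]) - l2 * v[s])
    return v

def rollout_chain(n_states=5, n_steps=5000, seed=0):
    """Random walk on a chain; reward 1 for stepping off the right end."""
    rng = np.random.default_rng(seed)
    s, data = n_states // 2, []
    for _ in range(n_steps):
        s_next = s + rng.choice([-1, 1])
        done = s_next < 0 or s_next >= n_states
        r = 1.0 if s_next >= n_states else 0.0
        data.append((s, r, int(np.clip(s_next, 0, n_states - 1)), done))
        s = n_states // 2 if done else s_next
    return data

data = rollout_chain()
v_low_gamma = td0(data, gamma=0.9)     # reduced discount factor, no penalty
v_l2 = td0(data, gamma=0.99, l2=0.05)  # full discount factor + L2 penalty
print("reduced gamma:", np.round(v_low_gamma, 3))
print("gamma + L2:   ", np.round(v_l2, 3))
```

In this kind of toy setup, both the reduced-discount run and the L2-penalized run shrink the value estimates toward zero, which is the informal intuition behind viewing a lower discount factor as a regularizer; the paper makes this correspondence precise for several TD methods.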
Author Information
Ron Amit (Technion – Israel Institute of Technology)
Ron Meir (Technion – Israel Institute of Technology)
Kamil Ciosek (Microsoft)
More from the Same Authors
- 2021 Poster: Ensemble Bootstrapping for Q-Learning » Oren Peer · Chen Tessler · Nadav Merlis · Ron Meir
- 2021 Spotlight: Ensemble Bootstrapping for Q-Learning » Oren Peer · Chen Tessler · Nadav Merlis · Ron Meir
- 2020 Poster: Option Discovery in the Absence of Rewards with Manifold Analysis » Amitay Bar · Ronen Talmon · Ron Meir
- 2019 Poster: Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN » Dror Freirich · Tzahi Shimkin · Ron Meir · Aviv Tamar
- 2019 Oral: Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN » Dror Freirich · Tzahi Shimkin · Ron Meir · Aviv Tamar
- 2018 Poster: Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory » Ron Amit · Ron Meir
- 2018 Oral: Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory » Ron Amit · Ron Meir