In this paper we propose Reward Machines, a type of finite state machine that supports the specification of reward functions while exposing reward function structure to the learner and supporting decomposition. We then present Q-Learning for Reward Machines (QRM), an algorithm which appropriately decomposes the reward machine and uses off-policy Q-learning to simultaneously learn subpolicies for its different components. QRM is guaranteed to converge to an optimal policy in the tabular case, in contrast to hierarchical reinforcement learning methods, which may converge to suboptimal policies. We demonstrate this behavior experimentally in two discrete domains. We also show how function approximation methods like neural networks can be incorporated into QRM, and that doing so can find better policies more quickly than hierarchical methods in a domain with a continuous state space.
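To make the idea concrete, the following is a minimal sketch of a reward machine as described in the abstract: a finite state machine whose transitions fire on propositional events and emit rewards. This is not the authors' implementation; the class, the event names, and the "get coffee, then deliver it" task are invented for illustration.

```python
class RewardMachine:
    """A finite state machine that maps (state, event) pairs to a
    successor state and a scalar reward (an illustrative sketch)."""

    def __init__(self, initial_state, transitions):
        # transitions: dict mapping (state, event) -> (next_state, reward)
        self.initial_state = initial_state
        self.transitions = transitions
        self.state = initial_state

    def reset(self):
        self.state = self.initial_state

    def step(self, event):
        """Advance the machine on an observed event; return the reward."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0  # events with no matching transition leave the state unchanged

# Hypothetical task: reach the coffee machine ("coffee"), then the office ("office").
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "coffee"): ("u1", 0.0),  # subgoal reached, no reward yet
        ("u1", "office"): ("u2", 1.0),  # task complete, reward 1
    },
)

rm.reset()
print(rm.step("office"))  # 0.0 -- visiting the office before coffee earns nothing
print(rm.step("coffee"))  # 0.0 -- machine advances to u1
print(rm.step("office"))  # 1.0 -- task completed
```

Exposing this structure is what lets an algorithm like QRM learn one subpolicy per machine state (here u0 and u1) while sharing experience across them via off-policy updates.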
Rodrigo A. Toro Icarte (University of Toronto)
I am a PhD student in the knowledge representation group at the University of Toronto. I am also a member of the Canadian Artificial Intelligence Association and the Vector Institute. My supervisor is Sheila McIlraith. I did my undergraduate degree in Computer Engineering and my MSc in Computer Science at Pontificia Universidad Católica de Chile (PUC). My master's degree was co-supervised by Alvaro Soto and Jorge Baier. While at PUC, I taught the undergraduate course "Introduction to Programming Languages."
Toryn Q. Klassen (University of Toronto)
Richard Valenzano (Element AI)
Sheila McIlraith (University of Toronto)
Related Events (a corresponding poster, oral, or spotlight)
2018 Poster: Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning »
Fri Jul 13th 04:15 -- 07:00 PM, Hall B