Reinforcement-learning agents seek to maximize a reward signal through environmental interactions. As humans, our role in the learning process is to express which behaviors are preferable by designing reward functions. In this work, we consider the reward-design problem for tasks formulated as reaching desirable states and avoiding undesirable states. We first propose a strict partial ordering of the policy space: we prefer policies that reach the desirable states faster and with higher probability while avoiding the undesirable states longer. We then propose an environment-independent tiered reward structure and show it is guaranteed to induce policies that are Pareto-optimal according to this preference relation.
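Below is a minimal sketch of how a tiered reward might be encoded for a reach-avoid task, assuming a simple three-tier partition (goal, neutral, and bad states) on a toy chain environment. The tier names, reward values, and the example environment are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: a reward that depends only on which tier a state belongs to.
# Tiers, values, and the toy chain below are illustrative assumptions.
from enum import Enum


class Tier(Enum):
    GOAL = 2     # desirable (absorbing) states
    NEUTRAL = 1  # everything else
    BAD = 0      # undesirable (absorbing) states


# Environment-independent tier-to-reward mapping: rewards increase strictly
# with tier, so reaching the goal tier sooner and staying out of the bad
# tier longer yields a strictly higher discounted return.
TIER_REWARD = {
    Tier.GOAL: 0.0,      # assumed value for the goal tier
    Tier.NEUTRAL: -1.0,  # assumed per-step cost for neutral states
    Tier.BAD: -100.0,    # assumed heavy penalty for bad states
}


def tiered_reward(tier: Tier) -> float:
    """Reward depends only on the tier of the current state."""
    return TIER_REWARD[tier]


def tier_of(state: int) -> Tier:
    """Toy 1-D chain: state 4 is the goal, state 0 is bad."""
    if state == 4:
        return Tier.GOAL
    if state == 0:
        return Tier.BAD
    return Tier.NEUTRAL


if __name__ == "__main__":
    trajectory = [2, 3, 4]  # a policy that walks toward the goal
    gamma = 0.95
    ret = sum(gamma ** t * tiered_reward(tier_of(s))
              for t, s in enumerate(trajectory))
    print(f"discounted return: {ret:.3f}")
```

Because the rewards strictly increase across tiers, a return-maximizing agent is pushed toward entering the goal tier as quickly as possible while avoiding the bad tier, which mirrors the reach-and-avoid preference described above.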
Author Information
Zhiyuan Zhou (Brown University)
Henry Sowerby (Brown University)
Michael L. Littman (Brown University)
More from the Same Authors
- 2021: Bad-Policy Density: A Measure of Reinforcement-Learning Hardness
  David Abel · Cameron Allen · Dilip Arumugam · D Ellis Hershkowitz · Michael L. Littman · Lawson Wong
- 2021: Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback
  Ishaan Shah · David Halpern · Michael L. Littman · Kavosh Asadi
- 2023 Poster: Meta-learning Parameterized Skills
  Haotian Fu · Shangqun Yu · Saket Tiwari · Michael L. Littman · George Konidaris
- 2019 Poster: Finding Options that Minimize Planning Time
  Yuu Jinnai · David Abel · David Hershkowitz · Michael L. Littman · George Konidaris
- 2019 Oral: Finding Options that Minimize Planning Time
  Yuu Jinnai · David Abel · David Hershkowitz · Michael L. Littman · George Konidaris
- 2018 Poster: State Abstractions for Lifelong Reinforcement Learning
  David Abel · Dilip S. Arumugam · Lucas Lehnert · Michael L. Littman
- 2018 Oral: State Abstractions for Lifelong Reinforcement Learning
  David Abel · Dilip S. Arumugam · Lucas Lehnert · Michael L. Littman
- 2018 Poster: Policy and Value Transfer in Lifelong Reinforcement Learning
  David Abel · Yuu Jinnai · Sophie Guo · George Konidaris · Michael L. Littman
- 2018 Oral: Policy and Value Transfer in Lifelong Reinforcement Learning
  David Abel · Yuu Jinnai · Sophie Guo · George Konidaris · Michael L. Littman
- 2018 Poster: Lipschitz Continuity in Model-based Reinforcement Learning
  Kavosh Asadi · Dipendra Misra · Michael L. Littman
- 2018 Oral: Lipschitz Continuity in Model-based Reinforcement Learning
  Kavosh Asadi · Dipendra Misra · Michael L. Littman
- 2017 Poster: An Alternative Softmax Operator for Reinforcement Learning
  Kavosh Asadi · Michael L. Littman
- 2017 Poster: Interactive Learning from Policy-Dependent Human Feedback
  James MacGlashan · Mark Ho · Robert Loftin · Bei Peng · Guan Wang · David L Roberts · Matthew E. Taylor · Michael L. Littman
- 2017 Talk: Interactive Learning from Policy-Dependent Human Feedback
  James MacGlashan · Mark Ho · Robert Loftin · Bei Peng · Guan Wang · David L Roberts · Matthew E. Taylor · Michael L. Littman
- 2017 Talk: An Alternative Softmax Operator for Reinforcement Learning
  Kavosh Asadi · Michael L. Littman