We consider the problem of how best to use prior experience to bootstrap lifelong learning, where an agent faces a series of task instances drawn from some task distribution. First, we identify the initial policy that optimizes expected performance over the distribution of tasks for increasingly complex classes of policy and task distribution. We empirically demonstrate the relative performance of each policy class’s optimal element in a variety of simple task distributions. We then consider value-function initialization methods that preserve PAC guarantees while simultaneously minimizing the learning required in two learning algorithms, yielding MaxQInit, a practical new method for value-function-based transfer. We show that MaxQInit performs well in simple lifelong RL experiments.
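The core idea behind MaxQInit is to initialize each Q-value to the maximum optimal Q-value observed across previously solved tasks, which remains optimistic with respect to the task distribution once enough tasks have been seen. A minimal sketch of that initialization step, assuming a tabular representation (the function name and data layout here are illustrative, not the paper's):

```python
def maxq_init(q_tables, states, actions):
    """MaxQInit-style initialization: for each (state, action) pair, take
    the maximum over the optimal Q-tables of previously solved tasks.

    q_tables: list of dicts mapping (state, action) -> optimal Q-value,
              one dict per previously solved task.
    """
    return {
        (s, a): max(q[(s, a)] for q in q_tables)
        for s in states
        for a in actions
    }

# Example: two solved tasks over two states and two actions.
task1 = {("s0", "a0"): 1.0, ("s0", "a1"): 0.2,
         ("s1", "a0"): 0.5, ("s1", "a1"): 0.9}
task2 = {("s0", "a0"): 0.3, ("s0", "a1"): 0.8,
         ("s1", "a0"): 0.7, ("s1", "a1"): 0.1}

q_init = maxq_init([task1, task2], ["s0", "s1"], ["a0", "a1"])
# q_init[("s0", "a1")] == 0.8: the per-pair max across the two tasks.
```

A PAC learner seeded with this table starts no less optimistic than any single task demands, so exploration bonuses need only close the gap down to the true values of the new task.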
Author Information
David Abel (Brown University)
Yuu Jinnai (Brown University)
Sophie Guo
George Konidaris (Brown University)
Michael L. Littman (Brown University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Policy and Value Transfer in Lifelong Reinforcement Learning »
  Thu. Jul 12th, 04:15 -- 07:00 PM, Room Hall B #173
More from the Same Authors
- 2021: Bad-Policy Density: A Measure of Reinforcement-Learning Hardness »
  David Abel · Cameron Allen · Dilip Arumugam · D Ellis Hershkowitz · Michael L. Littman · Lawson Wong
- 2021: Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback »
  Ishaan Shah · David Halpern · Michael L. Littman · Kavosh Asadi
- 2023: Specifying Behavior Preference with Tiered Reward Functions »
  Zhiyuan Zhou · Henry Sowerby · Michael L. Littman
- 2023: Guided Policy Search for Parameterized Skills using Adverbs »
  Benjamin Spiegel · George Konidaris
- 2023 Poster: Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning »
  Sam Lobel · Akhil Bagaria · George Konidaris
- 2023 Oral: Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning »
  Sam Lobel · Akhil Bagaria · George Konidaris
- 2023 Poster: Meta-learning Parameterized Skills »
  Haotian Fu · Shangqun Yu · Saket Tiwari · Michael L. Littman · George Konidaris
- 2023 Poster: RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents »
  Rafael A Rodriguez-Sanchez · Benjamin Spiegel · Jennifer Wang · Roma Patel · Stefanie Tellex · George Konidaris
- 2021: RL + Robotics Panel »
  George Konidaris · Jan Peters · Martin Riedmiller · Angela Schoellig · Rose Yu · Rupam Mahmood
- 2021 Poster: Skill Discovery for Exploration and Planning using Deep Skill Graphs »
  Akhil Bagaria · Jason Senthil · George Konidaris
- 2021 Oral: Skill Discovery for Exploration and Planning using Deep Skill Graphs »
  Akhil Bagaria · Jason Senthil · George Konidaris
- 2020 Poster: Learning Portable Representations for High-Level Planning »
  Steven James · Benjamin Rosman · George Konidaris
- 2019 Poster: Finding Options that Minimize Planning Time »
  Yuu Jinnai · David Abel · David Hershkowitz · Michael L. Littman · George Konidaris
- 2019 Oral: Finding Options that Minimize Planning Time »
  Yuu Jinnai · David Abel · David Hershkowitz · Michael L. Littman · George Konidaris
- 2019 Poster: Discovering Options for Exploration by Minimizing Cover Time »
  Yuu Jinnai · Jee Won Park · David Abel · George Konidaris
- 2019 Oral: Discovering Options for Exploration by Minimizing Cover Time »
  Yuu Jinnai · Jee Won Park · David Abel · George Konidaris
- 2018 Poster: State Abstractions for Lifelong Reinforcement Learning »
  David Abel · Dilip S. Arumugam · Lucas Lehnert · Michael L. Littman
- 2018 Oral: State Abstractions for Lifelong Reinforcement Learning »
  David Abel · Dilip S. Arumugam · Lucas Lehnert · Michael L. Littman
- 2018 Poster: Lipschitz Continuity in Model-based Reinforcement Learning »
  Kavosh Asadi · Dipendra Misra · Michael L. Littman
- 2018 Oral: Lipschitz Continuity in Model-based Reinforcement Learning »
  Kavosh Asadi · Dipendra Misra · Michael L. Littman
- 2017 Poster: An Alternative Softmax Operator for Reinforcement Learning »
  Kavosh Asadi · Michael L. Littman
- 2017 Poster: Interactive Learning from Policy-Dependent Human Feedback »
  James MacGlashan · Mark Ho · Robert Loftin · Bei Peng · Guan Wang · David L Roberts · Matthew E. Taylor · Michael L. Littman
- 2017 Talk: Interactive Learning from Policy-Dependent Human Feedback »
  James MacGlashan · Mark Ho · Robert Loftin · Bei Peng · Guan Wang · David L Roberts · Matthew E. Taylor · Michael L. Littman
- 2017 Talk: An Alternative Softmax Operator for Reinforcement Learning »
  Kavosh Asadi · Michael L. Littman