Poster
The Logical Options Framework
Brandon Araki · Xiao Li · Kiran Vodrahalli · Jonathan DeCastro · Micah Fry · Daniela Rus

Wed Jul 21 09:00 AM -- 11:00 AM (PDT) @ Virtual

Learning composable policies for environments with complex rules and tasks is a challenging problem. We introduce a hierarchical reinforcement learning framework called the Logical Options Framework (LOF) that learns policies that are satisfying, optimal, and composable. LOF efficiently learns policies that satisfy tasks by representing each task as an automaton and integrating it into learning and planning. We provide and prove conditions under which LOF will learn satisfying, optimal policies. Lastly, we show that LOF's learned policies can be composed to satisfy unseen tasks with only 10-50 retraining steps on our benchmarks. We evaluate LOF on four tasks in discrete and continuous domains, including a 3D pick-and-place environment.
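The core idea of composing option policies against a task automaton can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: the automaton, the `execute_task` meta-policy, and the stand-in options are all illustrative names, and the low-level options are stubbed out rather than learned.

```python
# Toy sketch of the idea behind LOF: a task is a finite-state automaton over
# propositions, and pre-learned "options" (one per proposition) are composed
# to satisfy it. All names here are illustrative assumptions.

# Task: "achieve A, then achieve B", as a deterministic automaton.
# States: 0 (start), 1 (A done), 2 (accepting).
AUTOMATON = {
    (0, "A"): 1,
    (1, "B"): 2,
}
ACCEPTING = {2}

def next_proposition(q):
    """Return a proposition that advances the automaton from state q, if any."""
    for (state, prop) in AUTOMATON:
        if state == q:
            return prop
    return None

def execute_task(options, q=0, max_steps=10):
    """Greedy meta-policy: run the option for the next required proposition
    until the automaton accepts. `options` maps each proposition to a callable
    that achieves it and returns the proposition it satisfied."""
    trace = []
    for _ in range(max_steps):
        if q in ACCEPTING:
            break
        prop = next_proposition(q)
        achieved = options[prop]()           # run the low-level option policy
        trace.append(achieved)
        q = AUTOMATON.get((q, achieved), q)  # automaton transition
    return q, trace

# Stand-in options; in LOF these would be learned sub-policies.
options = {"A": lambda: "A", "B": lambda: "B"}
```

Because each option is tied to a proposition rather than to one task, swapping in a different automaton (e.g., "B then A") reuses the same options without relearning them, which is the composability property the abstract describes.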

Author Information

Brandon Araki (MIT)
Xiao Li (MIT)
Kiran Vodrahalli (Columbia University)
Jonathan DeCastro (Toyota Research Institute)
Micah Fry (MIT Lincoln Laboratory)
Daniela Rus (MIT CSAIL)
