Poster

Zero-Shot Reward Specification via Grounded Natural Language

Parsa Mahmoudieh · Deepak Pathak · Trevor Darrell

Hall E #838

Keywords: [ RL: Average Cost/Reward ] [ RL: Deep RL ]


Abstract:

Reward signals in reinforcement learning are expensive to design and often require access to the true state, which is unavailable in the real world. Common alternatives, such as demonstrations or goal images, can be labor-intensive to collect. Text descriptions, on the other hand, provide a general, natural, and low-effort way of communicating the desired task. However, prior work on learning text-conditioned policies still relies on rewards defined using either the true state or labeled expert demonstrations. We use recent developments in large-scale vision-language models such as CLIP to devise a framework that generates the task reward signal from only the goal text description and raw pixel observations, which is then used to learn the task policy. We evaluate the proposed framework on control and robotic manipulation tasks. Finally, we distill the individual task policies into a single goal-text-conditioned policy that generalizes in a zero-shot manner to new tasks with unseen objects and unseen goal text descriptions.
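The core idea the abstract describes, scoring how well the current camera frame matches a goal text description in a shared embedding space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed_text` and `embed_image` functions below are hypothetical deterministic stand-ins for CLIP's text and image encoders, so the example runs without model weights. The reward is the cosine similarity between the two embeddings.

```python
import zlib
import numpy as np

DIM = 512  # embedding size of CLIP ViT-B/32

def _seeded_unit_vector(seed: int) -> np.ndarray:
    """Deterministic pseudo-embedding: a fixed unit vector per seed."""
    rng = np.random.default_rng(seed % (2**32))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

def embed_text(text: str) -> np.ndarray:
    """Hypothetical stand-in for a CLIP text encoder."""
    return _seeded_unit_vector(zlib.crc32(text.encode("utf-8")))

def embed_image(pixels: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a CLIP image encoder."""
    return _seeded_unit_vector(int(pixels.sum()))

def text_goal_reward(pixels: np.ndarray, goal_text: str) -> float:
    """Reward = cosine similarity between the frame and the goal description.

    With real CLIP encoders, a frame that visually matches the goal text
    would score higher than one that does not; the policy is then trained
    to maximize this signal instead of a hand-designed state-based reward.
    """
    return float(embed_image(pixels) @ embed_text(goal_text))

# Usage: score a raw pixel observation against a goal description.
frame = np.zeros((224, 224, 3))
r = text_goal_reward(frame, "close the drawer")
assert -1.0 <= r <= 1.0  # cosine similarity of unit vectors
```

In the real framework the placeholder encoders would be replaced by a pretrained vision-language model's encoders, which is what makes the reward "zero-shot": no task-specific reward engineering, true state, or demonstrations are needed.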
