Poster
Distilling Internet-Scale Vision-Language Models into Embodied Agents
Theodore R Sumers · Kenneth Marino · Arun Ahuja · Rob Fergus · Ishita Dasgupta

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #200

Instruction-following agents must ground language into their observation and action spaces. Learning to ground language is challenging, typically requiring domain-specific engineering or large quantities of human interaction data. To address this challenge, we propose using pretrained vision-language models (VLMs) to supervise embodied agents. We combine ideas from model distillation and hindsight experience replay (HER), using a VLM to retroactively generate language describing the agent's behavior. Simple prompting allows us to control the supervision signal, teaching an agent to interact with novel objects based on their names (e.g., planes) or their features (e.g., colors) in a 3D rendered environment. Few-shot prompting lets us teach abstract category membership, including pre-existing categories (food vs. toys) and ad-hoc ones (arbitrary preferences over objects). Our work outlines a new and effective way to use internet-scale VLMs, repurposing the generic language grounding acquired by such models to teach task-relevant groundings to embodied agents.
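
A minimal sketch of the relabeling-plus-distillation loop described in the abstract, written in Python under assumed interfaces: vlm_caption stands in for a query to a pretrained VLM and agent.update for a supervised behavioral-cloning step; neither name comes from the paper.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Trajectory:
        observations: List      # per-step observations (e.g., rendered frames)
        actions: List           # actions the agent took
        instruction: str = ""   # filled in retroactively by the VLM

    def relabel_with_vlm(trajectories: List[Trajectory],
                         vlm_caption: Callable[[List, str], str],
                         prompt: str) -> List[Trajectory]:
        """Hindsight relabeling: ask the VLM what the agent actually did.
        The prompt controls the supervision signal (object names, colors,
        or few-shot category examples)."""
        for traj in trajectories:
            traj.instruction = vlm_caption(traj.observations, prompt)
        return trajectories

    def distill(agent, relabeled: List[Trajectory]) -> None:
        """Train the agent to reproduce its own actions, conditioned on the
        VLM-generated instruction (the distillation step)."""
        for traj in relabeled:
            agent.update(traj.instruction, traj.observations, traj.actions)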

Author Information

Theodore R Sumers (Princeton University)
Kenneth Marino (Google DeepMind)
Arun Ahuja (DeepMind)
Rob Fergus (Facebook / NYU)
Ishita Dasgupta (DeepMind)
