Poster
PaLM-E: An Embodied Multimodal Language Model
Danny Driess · Fei Xia · Mehdi S. M. Sajjadi · Corey Lynch · Aakanksha Chowdhery · Brian Ichter · Ayzaan Wahid · Jonathan Tompson · Quan Vuong · Tianhe (Kevin) Yu · Wenlong Huang · Yevgen Chebotar · Pierre Sermanet · Daniel Duckworth · Sergey Levine · Vincent Vanhoucke · Karol Hausman · Marc Toussaint · Klaus Greff · Andy Zeng · Igor Mordatch · Pete Florence

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #237
Event URL: https://palm-e.github.io/

Large language models excel at a wide range of complex tasks. However, enabling general inference in the real world, e.g., for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Inputs to our embodied language model are multimodal sentences that interleave visual, continuous state estimation, and textual input encodings. We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks including sequential robotic manipulation planning, visual question answering, and captioning. Our evaluations show that PaLM-E, a single large embodied multimodal model, can address a variety of embodied reasoning tasks, from a variety of observation modalities, on multiple embodiments, and further, exhibits positive transfer: the model benefits from diverse joint training across internet-scale language, vision, and visual-language domains. Our largest model with 562B parameters, in addition to being trained on robotics tasks, is a visual-language generalist with state-of-the-art performance on OK-VQA, and retains generalist language capabilities with increasing scale.
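The core mechanism the abstract describes, splicing continuous sensor encodings into the same embedding sequence as word tokens, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the encoder, embedding table, dimensions, and the placeholder-token convention are all hypothetical stand-ins.

```python
import numpy as np

D_MODEL = 512      # hypothetical LLM embedding width
N_IMG_TOKENS = 4   # hypothetical number of soft tokens per observation

def embed_text(token_id):
    """Stand-in for the LLM's ordinary token-embedding lookup."""
    rng = np.random.default_rng(0)
    table = rng.normal(size=(1000, D_MODEL))
    return table[token_id][None, :]  # shape (1, D_MODEL)

def encode_observation(image):
    """Stand-in for a visual encoder whose output is projected into the
    LLM's embedding space; per the abstract, these encodings are trained
    end-to-end together with the pre-trained language model."""
    rng = np.random.default_rng(1)
    proj = rng.normal(size=(image.size, D_MODEL * N_IMG_TOKENS))
    return (image.reshape(1, -1) @ proj).reshape(N_IMG_TOKENS, D_MODEL)

# A "multimodal sentence": token ids with a placeholder (-1) marking
# where the continuous observation is interleaved into the text.
token_ids = [5, 17, -1, 42, 7]   # e.g. "what is <obs> doing ?"
image = np.zeros((8, 8))         # dummy sensor observation

pieces = []
for t in token_ids:
    if t == -1:
        pieces.append(encode_observation(image))  # continuous encodings
    else:
        pieces.append(embed_text(t))              # word embedding
sequence = np.concatenate(pieces, axis=0)  # fed to the LLM as one sequence
print(sequence.shape)  # (4 text tokens + 4 observation tokens, D_MODEL)
```

The point of the sketch is only that, once projected into the embedding space, sensor readings and words occupy the same sequence and the language model processes them uniformly.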

Author Information

Danny Driess (TU Berlin)
Fei Xia (Google DeepMind)
Mehdi S. M. Sajjadi (Google)
Corey Lynch (Google)
Aakanksha Chowdhery (Google DeepMind)
Brian Ichter (Google DeepMind)
Ayzaan Wahid
Jonathan Tompson (Google Brain)
Quan Vuong (University of California San Diego)
Tianhe (Kevin) Yu (Google DeepMind)
Wenlong Huang (Stanford University)
Yevgen Chebotar (Google DeepMind)
Pierre Sermanet (Google)
Daniel Duckworth (Google Brain)
Sergey Levine (UC Berkeley)
Vincent Vanhoucke (Google DeepMind)

Vincent Vanhoucke is a Distinguished Scientist and Senior Director of Robotics at Google DeepMind. His research has spanned many areas of artificial intelligence and machine learning, from speech recognition to deep learning, computer vision, and robotics. His Udacity lecture series has introduced over 100,000 students to Deep Learning. He is President of the Robot Learning Foundation, which organizes the Conference on Robot Learning, now in its seventh year. He holds a doctorate from Stanford University and a diplôme d'ingénieur from the École Centrale Paris.

Karol Hausman (Google Brain)
Marc Toussaint (TU Berlin)
Klaus Greff (Google Brain)
Andy Zeng (Google)
Igor Mordatch (Google Research)
Pete Florence (Google)
