

Poster in Workshop: Multi-modal Foundation Model meets Embodied AI (MFM-EAI)

The Embodied World Model Based on LLM with Visual Information and Prediction-Oriented Prompts

Wakana Haijima · Kou Nakakubo · Masahiro Suzuki · Yutaka Matsuo


Abstract:

In recent years, as machine learning, particularly for vision and language understanding, has improved, research in embodied AI has also evolved. VOYAGER is a well-known LLM-based embodied AI that enables autonomous exploration in the Minecraft world, but it has issues such as the underutilization of visual data and insufficient functionality as a world model. In this research, we investigated how visual data can be utilized and how an LLM can function as a world model in order to improve the performance of embodied AI. The experimental results revealed that the LLM can extract the necessary information from visual data, and that using this information improves its performance. The results also suggest that carefully designed prompts can bring out the LLM’s function as a world model.
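To make the idea of a "prediction-oriented prompt" concrete, the following is a minimal, hypothetical sketch of how extracted visual information and a prediction request might be combined into a single prompt for an LLM. The function name, prompt wording, and example inputs are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: composing a prediction-oriented prompt that pairs a
# textual summary of visual observations with a request to predict the next
# world state. All names and wording here are assumptions for illustration.

def build_prediction_prompt(visual_summary: str, goal: str, last_action: str) -> str:
    """Combine extracted visual information with a prediction-oriented query."""
    return (
        "You are an embodied agent in the Minecraft world acting as a world model.\n"
        f"Current visual observation (extracted): {visual_summary}\n"
        f"Current goal: {goal}\n"
        f"Last action taken: {last_action}\n"
        "First predict the resulting world state after this action, "
        "then propose the next action that moves toward the goal."
    )


if __name__ == "__main__":
    prompt = build_prediction_prompt(
        visual_summary="oak trees ahead, a stone cliff to the right, daylight",
        goal="collect 3 oak logs",
        last_action="move forward 5 blocks",
    )
    print(prompt)  # this string would then be sent to an LLM of choice
```

The design intent, under these assumptions, is that asking the model to predict the consequence of an action before choosing the next one encourages world-model-like behavior rather than purely reactive action selection.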
