Poster

RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis

Yao Mu · Junting Chen · Qing-Long Zhang · Shoufa Chen · Qiaojun Yu · Chongjian GE · Runjian Chen · Zhixuan Liang · Mengkang Hu · Chaofan Tao · Peize Sun · Haibao Yu · Chao Yang · WENQI SHAO · Wenhai Wang · Jifeng Dai · Yu Qiao · Mingyu Ding · Ping Luo

Hall C 4-9 #2811
[ Paper PDF ] [ Poster ]
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Robotic behavior synthesis, the problem of understanding multimodal inputs and generating precise physical control for robots, is an important part of Embodied AI. Despite successes in applying multimodal large language models for high-level understanding, it remains challenging to translate these conceptual understandings into detailed robotic actions while generalizing across various scenarios. In this paper, we propose a tree-structured multimodal code generation framework for generalized robotic behavior synthesis, termed RoboCodeX. RoboCodeX decomposes high-level human instructions into multiple object-centric manipulation units, each paired with physical preferences such as affordance and safety constraints, and applies code generation to generalize across various robotic platforms. To further enhance the capability to map conceptual and perceptual understanding into control commands, a specialized multimodal reasoning dataset is collected for pre-training and an iterative self-updating methodology is introduced for supervised fine-tuning. Extensive experiments demonstrate that RoboCodeX achieves state-of-the-art performance in both simulators and on real robots across four kinds of manipulation tasks and one embodied navigation task.
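To make the decomposition idea concrete, the sketch below illustrates in Python how a high-level instruction might be broken into a tree of object-centric manipulation units, each carrying affordance and safety preferences, and then rendered into robot-API code. This is a minimal sketch of the concept only: the class, its fields, and the `robot.*` calls are hypothetical illustrations, not RoboCodeX's actual interfaces or generated code.

```python
# Hypothetical sketch: decompose an instruction into object-centric
# manipulation units with physical preferences, then emit robot code.
# All names here are illustrative, not from the RoboCodeX implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ManipulationUnit:
    """One object-centric node in the task tree."""
    target_object: str
    action: str                      # e.g. "grasp", "place", "open"
    affordance: str                  # preferred contact region / grasp pose
    safety_constraints: List[str]    # e.g. "limit pull force"
    children: List["ManipulationUnit"] = field(default_factory=list)

    def to_code(self) -> str:
        """Render this unit and its subtree as robot-API pseudocode."""
        lines = [
            f"# {self.action} {self.target_object} "
            f"(affordance: {self.affordance}; "
            f"safety: {', '.join(self.safety_constraints)})",
            f"robot.{self.action}('{self.target_object}')",
        ]
        for child in self.children:            # subsequent steps in the tree
            lines.append(child.to_code())
        return "\n".join(lines)


# Example: "put the apple into the drawer" as a small manipulation tree.
task = ManipulationUnit(
    target_object="drawer",
    action="open",
    affordance="handle",
    safety_constraints=["limit pull force"],
    children=[
        ManipulationUnit(
            target_object="apple",
            action="grasp",
            affordance="top surface",
            safety_constraints=["avoid squeezing"],
            children=[
                ManipulationUnit(
                    target_object="drawer",
                    action="place",
                    affordance="interior",
                    safety_constraints=["release gently"],
                ),
            ],
        ),
    ],
)

print(task.to_code())
```

In this reading, the tree structure lets each unit stay grounded to a single object while the generated code string is what would ultimately be executed on a given robot platform.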
