

Poster in Workshop: Multi-modal Foundation Model meets Embodied AI (MFM-EAI)

RoboGolf: Mastering Real-World Minigolf with a Reflective Multi-Modality Vision-Language Model

Hantao Zhou · Tianying Ji · Lukas Sommerhalder · Michael Görner · Norman Hendrich · Fuchun Sun · Jianwei Zhang · Huazhe Xu


Abstract:

Minigolf is an exemplary real-world game for examining embodied intelligence, as putting the ball demands challenging spatial and kinodynamic reasoning. Reflective reasoning is also required when the feasibility of a hole is not guaranteed. We introduce RoboGolf, a VLM-based framework that combines dual-camera perception with closed-loop action refinement, augmented by a reflective equilibrium loop. The core of both loops is powered by finetuned VLMs. We analyze the capabilities of the framework in an offline inference setting, relying on an extensive set of recorded trajectories. Exemplary demonstrations of the analyzed problem domain are available at https://robogolfvlm.github.io/.
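
The abstract describes two nested loops driven by finetuned VLMs: an inner closed loop that refines the stroke from dual-camera observations and execution feedback, and an outer reflective loop that judges whether the hole is feasible at all. A minimal structural sketch of that control flow is given below; all names (FinetunedVLM, DualCameraRig-style capture, execute_stroke, judge_feasibility, and the Stroke fields) are hypothetical placeholders introduced for illustration, not the authors' actual API or implementation.

```python
# Structural sketch only, under assumed interfaces; not the RoboGolf implementation.
from dataclasses import dataclass


@dataclass
class Stroke:
    angle_deg: float   # hypothetical putting direction
    power: float       # hypothetical normalized stroke strength in [0, 1]


class FinetunedVLM:
    """Stand-in for the finetuned vision-language model (stubbed methods)."""

    def propose_stroke(self, top_view, side_view, feedback=None) -> Stroke:
        ...  # would return an initial or refined stroke from the two camera views

    def judge_feasibility(self, top_view, side_view, history) -> bool:
        ...  # reflective check: is the current hole still worth attempting?


def play_hole(cameras, robot, vlm: FinetunedVLM, max_attempts: int = 5) -> bool:
    """Outer reflective loop wrapping an inner closed-loop action refinement."""
    history = []
    for _ in range(max_attempts):
        top_view, side_view = cameras.capture()           # dual-camera perception
        # Reflective equilibrium loop: stop if the challenge looks infeasible.
        if not vlm.judge_feasibility(top_view, side_view, history):
            return False
        # Closed-loop refinement: propose a stroke conditioned on past outcomes.
        feedback = history[-1] if history else None
        stroke = vlm.propose_stroke(top_view, side_view, feedback)
        outcome = robot.execute_stroke(stroke)             # real-world putt
        if outcome.holed:
            return True
        history.append(outcome)                            # feed back into next attempt
    return False
```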
