

Poster in Workshop: Multi-modal Foundation Model meets Embodied AI (MFM-EAI)

Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments

Siddharth Nagar Nayak · Adelmo Orozco · Marina Have · Jackson Zhang · Vittal Thirumalai · Darren Chen · Aditya Kapoor · Eric Robinson · Karthik Gopalakrishnan · James Harrison · Anuj Mahajan · Brian Ichter · Hamsa Balakrishnan


Abstract:

Language Models (LMs) excel at understanding natural language, which makes them a powerful tool for parsing human instructions into task plans for autonomous agents. Unlike traditional planning methods that rely on domain knowledge and handcrafted rules, LMs generalize from diverse data and adapt to various tasks with minimal tuning, acting as a compressed knowledge base. However, LMs in their standard form struggle with long-horizon tasks, particularly in partially observable multi-agent settings. We propose an LM-based Long-Horizon Planner for Multi-Agent Robotics (LLaMAR), a cognitive architecture that employs a plan-act-correct-verify framework. It achieves state-of-the-art results on partially observable long-horizon planning tasks without relying on privileged information from oracles. Experiments show that LLaMAR achieves a 30% higher success rate than other state-of-the-art LM-based multi-agent planners on household tasks of varying complexity in the AI2-THOR environment.
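As a rough illustration of the plan-act-correct-verify framework the abstract describes, the Python sketch below wires the four stages into a loop over agents acting on local observations. Everything in it is a hypothetical stand-in, not the authors' implementation: the `query_lm` stub, the toy `Env`, and the function and parameter names are all assumptions made for illustration.

```python
# Minimal sketch of a plan-act-correct-verify loop in the spirit of LLaMAR.
# All names here (query_lm, Env, plan_act_correct_verify) are hypothetical
# illustrations, not the paper's code.

from dataclasses import dataclass


def query_lm(prompt: str) -> str:
    """Stub for a language-model call; replace with a real LM client."""
    # A real system would send `prompt` to an LM and parse its reply.
    return "open the fridge"  # canned response for illustration


@dataclass
class Env:
    """Toy partially observable environment: agents see only local observations."""
    step_count: int = 0

    def observe(self, agent: str) -> str:
        return f"{agent} sees: kitchen, fridge (closed)"

    def act(self, agent: str, action: str) -> bool:
        self.step_count += 1
        return self.step_count > 1  # pretend the first attempt fails


def plan_act_correct_verify(env: Env, agents: list[str], task: str,
                            max_steps: int = 10) -> bool:
    for _ in range(max_steps):
        for agent in agents:
            obs = env.observe(agent)
            # Plan: propose the next subtask from the agent's local observation.
            subtask = query_lm(f"Task: {task}\nObservation: {obs}\nNext subtask:")
            # Act: execute the proposed subtask in the environment.
            succeeded = env.act(agent, subtask)
            # Correct: on failure, ask the LM to revise the subtask and retry.
            if not succeeded:
                subtask = query_lm(f"'{subtask}' failed given {obs}. Revised subtask:")
                succeeded = env.act(agent, subtask)
            # Verify: judge completion from observations alone, with no oracle.
            done = query_lm(f"Task: {task}\nObservation: {env.observe(agent)}\n"
                            f"Done? yes/no:")
            if succeeded and done.strip().lower().startswith("yes"):
                return True
    return False


if __name__ == "__main__":
    print(plan_act_correct_verify(Env(), agents=["robot_1", "robot_2"],
                                  task="chill the soda"))
```

The point of the structure, not the stubs, is what matters: each stage is a separate LM query, and the verify step closes the loop so progress is judged from observations rather than privileged environment state.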
