Poster
Fleet of Agents: Coordinated Problem Solving with Large Language Models
Lars Klein · Nearchos Potamitis · Roland Aydin · Robert West · Caglar Gulcehre · Akhil Arora
West Exhibition Hall B2-B3 #W-417
Large language models (LLMs), like GPT-4 and LLaMA, are powerful tools for solving complex problems. But making them reason well often comes with a trade-off between high costs and good quality. Our work introduces a new framework called Fleet of Agents (FoA), which cleverly balances cost and quality. Instead of relying on a single agent or blindly exploring many paths, FoA uses an approach inspired by "genetic" or "evolutionary" algorithms. FoA spawns many small agents to explore possible solutions and then selects the most promising ones to continue, much like nature, which favors the fittest.

We tested FoA using various LLMs on tasks such as mathematical puzzles, crosswords, question answering, and online shopping. Across all tasks, FoA consistently resulted in better solution quality while substantially reducing the computational cost compared to existing methods.

This means FoA helps AI systems reason more effectively and efficiently, making them more accessible, practical, and sustainable for a wide range of applications. We have made FoA publicly available so others can use and build on it.
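The spawn-and-select loop described above can be sketched as a generic resampling search. This is a minimal illustration, not the authors' implementation: the `expand` and `score` callables stand in for an LLM agent's step and a fitness heuristic, both of which are hypothetical placeholders here.

```python
import random

def fleet_search(initial_state, expand, score, fleet_size=8, steps=5, seed=0):
    """Evolutionary-style fleet search (illustrative sketch).

    Each round, every agent extends its candidate solution, then the
    fleet is resampled in proportion to fitness, so promising paths
    survive and weak ones die out.
    """
    rng = random.Random(seed)
    fleet = [initial_state] * fleet_size
    for _ in range(steps):
        # Each agent independently takes one exploration step.
        fleet = [expand(state, rng) for state in fleet]
        # "Survival of the fittest": resample weighted by score.
        weights = [score(state) for state in fleet]
        fleet = rng.choices(fleet, weights=weights, k=fleet_size)
    return max(fleet, key=score)

# Toy usage: agents search for a target integer via random steps.
target = 42
best = fleet_search(
    initial_state=0,
    expand=lambda s, rng: s + rng.choice([-3, -1, 1, 3]),
    score=lambda s: 1.0 / (1.0 + abs(target - s)),
)
```

In FoA itself the expansion step would query an LLM and the fitness signal would come from a task-specific value estimate; the resampling structure is what gives the cost/quality balance.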