MASPOB: Bandit-Based Prompt Optimization for Multi-Agent Systems with Graph Neural Networks
Abstract
Large Language Models (LLMs) have achieved significant success across a wide range of tasks, serving as the cognitive backbone of Multi-Agent Systems (MAS) that orchestrate complex practical workflows. Because MAS performance is highly sensitive to input prompts and many deployment scenarios preclude modifying the MAS architecture, prompt optimization emerges as a critical strategy for performance enhancement. However, its real-world application is impeded by three key challenges: (1) the need for high sample efficiency due to prohibitive evaluation costs, (2) topology-induced coupling among agents' prompts, and (3) the combinatorial explosion of the joint search space. To address these challenges, we introduce MASPOB (Multi-Agent System Prompt Optimization via Bandits), a sample-efficient bandit-based framework. By using Upper Confidence Bound (UCB) scores to quantify uncertainty, the bandit framework balances exploration and exploitation, maximizing performance gains within a strictly limited evaluation budget. To handle topology-induced coupling, MASPOB integrates Graph Neural Networks (GNNs) to capture structural priors, learning topology-aware representations of prompt semantics. Furthermore, it employs coordinate ascent to decompose the joint optimization into univariate sub-problems, reducing the search complexity from exponential to linear. Extensive experiments across diverse benchmarks demonstrate that MASPOB achieves state-of-the-art performance, consistently outperforming existing baselines.
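The two optimization ingredients named in the abstract, UCB-based arm selection and coordinate ascent over per-agent prompts, can be illustrated with a minimal sketch. This is not the paper's algorithm: it omits the GNN component entirely, and all names (`coordinate_ascent_ucb`, `ucb_score`, the toy `evaluate` function) are hypothetical. It only shows how fixing all agents' prompts but one turns the exponential joint search into a sequence of univariate bandit problems solved with UCB1.

```python
import math

def ucb_score(mean, n_pulls, total_pulls, c=1.0):
    """UCB1 index: empirical mean plus an exploration bonus."""
    if n_pulls == 0:
        return float("inf")  # force at least one evaluation per candidate
    return mean + c * math.sqrt(2.0 * math.log(total_pulls) / n_pulls)

def coordinate_ascent_ucb(candidates, evaluate, budget):
    """Coordinate ascent with a UCB1 bandit per coordinate (sketch).

    candidates: list with one entry per agent, each a list of prompt strings.
    evaluate:   maps a full prompt assignment (tuple) -> score (noisy in general).
    budget:     total number of end-to-end MAS evaluations allowed.
    """
    assignment = [cands[0] for cands in candidates]
    spent = 0
    while spent < budget:
        for agent, cands in enumerate(candidates):
            stats = {c: [0.0, 0] for c in cands}  # candidate -> [mean, pulls]
            total = 0
            # Spend a small slice of the budget on this coordinate only,
            # holding every other agent's prompt fixed.
            for _ in range(min(2 * len(cands), budget - spent)):
                total += 1
                choice = max(
                    cands,
                    key=lambda c: ucb_score(stats[c][0], stats[c][1], total),
                )
                assignment[agent] = choice
                reward = evaluate(tuple(assignment))
                mean, n = stats[choice]
                stats[choice] = [(mean * n + reward) / (n + 1), n + 1]
                spent += 1
            # Exploit: fix this coordinate to the best empirical mean.
            if total:
                assignment[agent] = max(cands, key=lambda c: stats[c][0])
            if spent >= budget:
                break
    return assignment
```

Under this decomposition each pass evaluates candidates per agent rather than per joint assignment, which is the exponential-to-linear reduction the abstract refers to.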