Great Minds Think Alike: Contextual Tacit Communication for Decentralized LLM-Agent Cooperation
Abstract
Large language models (LLMs) are increasingly used as planners for cooperative embodied agents, but multi-agent settings amplify inconsistency under partial observability and make explicit communication costly or even unavailable. Many existing approaches rely on online message passing; when communication is removed, agents often fall back to independent local planning and suffer from tacit miscoordination. We introduce Contextual Tacit Communication, a training-free protocol that aligns decentralized decisions with a joint LLM value score without explicit message actions. Our method measures context-conditioned value rectifications via residual banding to pinpoint miscoordinated actions, and amortizes the resulting coordination signals into a retrieval-augmented Tacit Rule Memory that supplies prompt-level cooperation rules at execution time. Experiments on VIKI, C-WAH, and TDW-MAT show that our approach improves cooperation performance over baselines while incurring lower runtime overhead than communication-based methods.
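To make the retrieval-augmented memory concrete, the sketch below illustrates one plausible shape for a Tacit Rule Memory: store (context, rule) pairs and retrieve the most similar rules at execution time for injection into the agent's prompt. The paper's abstract does not specify the embedding or retrieval policy, so the bag-of-words cosine similarity and top-k retrieval here, along with the class and method names, are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the embedding (bag-of-words cosine) and the
# top-k retrieval policy are assumptions made for exposition.
from collections import Counter
import math


def _embed(text):
    # Toy context embedding: bag-of-words term counts.
    return Counter(text.lower().split())


def _cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


class TacitRuleMemory:
    """Stores (context, rule) pairs; retrieves rules for similar contexts."""

    def __init__(self):
        self._entries = []  # list of (embedding, rule_text)

    def add(self, context, rule):
        self._entries.append((_embed(context), rule))

    def retrieve(self, context, k=2):
        # Rank stored rules by similarity to the current context and
        # return the top k for inclusion in the planner's prompt.
        q = _embed(context)
        ranked = sorted(self._entries,
                        key=lambda e: _cosine(q, e[0]), reverse=True)
        return [rule for _, rule in ranked[:k]]


memory = TacitRuleMemory()
memory.add("two agents both heading to kitchen",
           "yield the shared goal to the closer agent")
memory.add("agent idle while teammate carries object",
           "move to the next unexplored room")
rules = memory.retrieve("both agents heading to kitchen with one apple", k=1)
```

In a full system the retrieved rule strings would be prepended to the LLM planner's prompt, so coordination knowledge accumulated offline influences decentralized decisions without any message-passing action at execution time.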