When LLMs Develop Languages: Symbolic Communication for Efficient Multi-Agent Reasoning
Zhengqi Pei ⋅ Qingming Huang ⋅ Shuhui Wang
Abstract
Chain-of-Thought (CoT) prompting improves large language models (LLMs) on difficult reasoning tasks, but the long natural-language rationales it generates are verbose and poorly optimized for machine-level efficiency. We propose *Communicative Language Symbolism Routing* (CLSR), a test-time framework in which multiple LLM agents autonomously *invent, evolve, and share* compact *Language Symbolism Frameworks* (LSFs), and a latent-free router adaptively selects and composes these languages per query to optimize the accuracy--token-budget trade-off. Unlike prompt optimization, which refines surface instructions, CLSR treats each LSF as a reusable symbolic protocol and improves it through an evolutionary loop. At inference time, the router may invoke a single low-cost LSF call, ensemble multiple dialects with aggregation, or execute a multi-round composition protocol on harder queries. Across challenging benchmarks, CLSR reduces completion-token usage (and hence latency) by $3{-}6\times$ compared to standard CoT while maintaining accuracy, outperforming other token-reduction and prompt-optimization baselines. We further provide theoretical analysis: (i) an information-theoretic lower bound relating accuracy and token count under arbitrary symbolism, and (ii) a characterization of CLSR protocols as a generalization of program-execution pipelines.
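The abstract describes three routing modes (single LSF call, dialect ensembling, multi-round composition). The following is a minimal, purely illustrative sketch of that dispatch logic; all names (`Router`, `difficulty`, the thresholds) are hypothetical assumptions, not the paper's implementation, and real LSFs would be LLM calls rather than plain functions.

```python
# Hypothetical sketch of CLSR-style per-query routing. An LSF is modeled as a
# plain callable mapping a query string to a compact symbolic answer; the
# difficulty estimator and the 0.3 / 0.7 thresholds are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

LSF = Callable[[str], str]  # one Language Symbolism Framework (stand-in)

@dataclass
class Router:
    lsfs: List[LSF]
    difficulty: Callable[[str], float]  # assumed estimator in [0, 1]

    def route(self, query: str) -> str:
        d = self.difficulty(query)
        if d < 0.3:
            # Easy query: a single low-cost LSF call.
            return self.lsfs[0](query)
        if d < 0.7:
            # Medium query: ensemble several dialects, aggregate by majority vote.
            answers = [lsf(query) for lsf in self.lsfs]
            return max(set(answers), key=answers.count)
        # Hard query: multi-round composition, piping each LSF's output onward.
        out = query
        for lsf in self.lsfs:
            out = lsf(out)
        return out
```

For example, with toy string-transforming LSFs and a length-based difficulty proxy, short queries take the cheap single-call path while long ones trigger the composition protocol; the accuracy--token trade-off in the paper would be governed by how often each path fires.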