Position: AGI Requires a Coordination Layer on Top of Pattern Repositories
Abstract
This position paper argues that influential critiques dismissing Large Language Models (LLMs) as a dead end for AGI misidentify the bottleneck: they confuse the ocean with the net. Pattern repositories are the necessary System-1 substrate; the missing component is a System-2 coordination layer that selects, constrains, and binds these patterns. We formalize this layer via an anchoring theory that models reasoning as a phase transition governed by effective support (ρ_d), representational mismatch (d_r), and an adaptive anchoring budget (γ log k). We translate theory into architecture with a multi-agent coordination stack. Unlike the unstructured agent swarms of recent hype, this layer provides a principled integration of diversity and control via baiting (PID-modulated debate), filtering (trace-output verification), and persistence (transactional memory). Empirical validation on causal judgment and on the sycophancy-paranoia trade-off demonstrates that static prompting fails where adaptive control succeeds, confirming that failures attributed to substrate limitations are often resolved by regulated coordination. By reframing common objections as testable coordination failures, we argue that the path to AGI runs through LLMs, not around them.