Learning Multi-Agent Coordination via Sheaf-ADMM
Abstract
We present a differentiable optimization framework for multi-agent coordination. An input is decomposed into overlapping local views, each processed by an agent that solves a convex subproblem parametrized by a neural encoder. Agents coordinate through the Alternating Direction Method of Multipliers (ADMM), with inter-agent constraints encoded by a cellular sheaf that specifies which aspects of neighboring solutions must agree. Backpropagating through the unrolled optimization jointly trains the encoders, decoders, and sheaf structure. We evaluate on maze pathfinding, image classification, and Sudoku, where agents whose local views are individually insufficient coordinate to produce correct global outputs. On MNIST, this locality also yields improved robustness over a standard CNN under distribution shifts (padding, missing patches, and noise), while exposing interpretable primal, consensus, and dual variables that make the coordination dynamics directly inspectable.
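To make the coordination mechanism concrete, the following is a minimal sketch of one sheaf-constrained ADMM loop for two agents with quadratic local objectives standing in for the neural-parametrized subproblems. The restriction maps F1 and F2, the targets a1 and a2, and the penalty rho are all illustrative assumptions, not values from the paper; the update form is standard scaled-form consensus ADMM, with the sheaf edge enforcing F1 @ x1 == F2 @ x2.

```python
import numpy as np

# Hypothetical setup: each agent i minimizes 1/2||x_i - a_i||^2, subject to
# the sheaf constraint F1 @ x1 == F2 @ x2 on their shared edge.
rho = 1.0                      # ADMM penalty parameter (assumed)
F1 = np.array([[0.0, 1.0]])    # restriction map: agent 1 exposes its 2nd coord
F2 = np.array([[1.0, 0.0]])    # restriction map: agent 2 exposes its 1st coord
a1 = np.array([0.3, 2.0])      # agent 1's local target (assumed)
a2 = np.array([0.5, -1.0])     # agent 2's local target (assumed)

x1, x2 = a1.copy(), a2.copy()       # primal variables
z = np.zeros(1)                     # consensus variable on the edge
u1, u2 = np.zeros(1), np.zeros(1)   # scaled dual variables

def local_solve(a, F, z, u):
    # Closed-form argmin of 1/2||x - a||^2 + rho/2 ||F x - z + u||^2:
    # solve (I + rho F^T F) x = a + rho F^T (z - u).
    A = np.eye(len(a)) + rho * F.T @ F
    b = a + rho * F.T @ (z - u)
    return np.linalg.solve(A, b)

for _ in range(200):
    # Primal updates: each agent solves its local subproblem independently.
    x1 = local_solve(a1, F1, z, u1)
    x2 = local_solve(a2, F2, z, u2)
    # Consensus update: average the restricted views plus scaled duals.
    z = 0.5 * ((F1 @ x1 + u1) + (F2 @ x2 + u2))
    # Dual updates: accumulate the remaining disagreement on the edge.
    u1 += F1 @ x1 - z
    u2 += F2 @ x2 - z

# At convergence the exposed coordinates agree: x1[1] == x2[0] == z[0] == 1.25
# (the minimizer of (t - 2)^2/2 + (t - 0.5)^2/2).
print(z, x1, x2)
```

In the full framework the quadratic targets would come from learned encoders, the restriction maps would themselves be trainable, and the loop would be unrolled for a fixed number of iterations so gradients can flow through every primal, consensus, and dual update.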