New Frontiers in Game-Theoretic Learning
Abstract
As Artificial Intelligence systems are increasingly deployed in high-impact, mixed-motive ecosystems, we are witnessing a paradigm shift from monolithic reasoning to strategic agency. However, a critical "translation gap" exists between classical foundational theory and modern AI practice. While classical game theory and mechanism design focus on long-run behaviors and static equilibria under explicit specifications, modern learning agents operate via non-stationary learning dynamics in unknown environments, where traditional equilibria may be computationally intractable or dynamically irrelevant. Furthermore, while Large Language Models (LLMs) excel at parsing rich context, they often exhibit brittle strategic planning and exploitable biases when interacting in multi-agent settings. The NExT-Game workshop aims to bridge this gap by uniting the algorithmic game theory and machine learning communities. We seek to explore two frontiers: (i) theoretical frontiers, reimagining classical abstractions for high-dimensional, non-convex learning landscapes and characterizing principal-agent dynamics among boundedly rational, regret-minimizing learners; and (ii) applied frontiers, utilizing gamification and self-play as cognitive scaffolding to curb LLM hallucinations, and addressing the systemic risks of "algorithmic monoculture". By fostering dialogue between theoreticians and practitioners, this workshop will chart concrete research directions that couple strategic stability with realistic multi-agent learning dynamics, ultimately informing emerging AI policy that is robust and incentive-compatible.