

Poster in Workshop: Localized Learning: Decentralized Model Updates via Non-Global Objectives

Auto-Aligning Multiagent Incentives with Global Objectives

Minae Kwon · John Agapiou · Edgar Duéñez-Guzmán · Romuald Elie · Georgios Piliouras · Kalesha Bullard · Ian Gemp

Keywords: [ Collective Intelligence ] [ Reward Sharing ] [ Price of Anarchy ] [ Multiagent Learning ]


Abstract: The general ability to achieve a single task with a set of decentralized, intelligent agents is an important goal in multiagent research. The complex interaction between individual agents' incentives makes it particularly challenging to design their objectives such that the resulting multiagent system aligns with a desired global goal. In this work, instead of considering the problem of designing suitable incentives from scratch, we assume a multiagent system with given preset incentives and consider $\textit{automatically modifying}$ these incentives online to achieve a new goal. This reduces the search space over possible individual incentives and takes advantage of the effort already invested by the previous system designer. We demonstrate the promise as well as the limitations of re-purposing multiagent systems in this way, both theoretically and empirically, on a variety of domains. Surprisingly, we show that training a diverse multiagent system to align with a modified global objective ($g \rightarrow g'$) can, in at least one case, lead to better generalization performance in unseen test scenarios, when evaluated on the original objective ($g$).
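As an illustration only (the abstract does not spell out the mechanism), the "Reward Sharing" keyword suggests one simple way preset incentives could be modified online: mix each agent's existing reward with the new global objective $g'$ via a per-agent weight. The sketch below is a hypothetical minimal example of such mixing; the function `mixed_rewards` and the weights `alphas` are assumptions for exposition, not the paper's method.

```python
# Minimal sketch (an assumption, not the paper's algorithm) of re-purposing
# preset agent incentives toward a new global objective g' by mixing each
# agent's original reward with the global reward via per-agent weights.
import numpy as np

def mixed_rewards(preset_rewards, global_reward, alphas):
    """Combine each agent's preset reward with the value of the new global objective g'.

    preset_rewards: array of shape (n_agents,), rewards from the original design.
    global_reward:  scalar value of the new global objective g' at this step.
    alphas:         array of shape (n_agents,), per-agent mixing weights in [0, 1],
                    which an outer optimizer could adjust online.
    """
    preset_rewards = np.asarray(preset_rewards, dtype=float)
    alphas = np.clip(np.asarray(alphas, dtype=float), 0.0, 1.0)
    return (1.0 - alphas) * preset_rewards + alphas * global_reward

# Example: three agents with preset rewards, one shared global objective value,
# and per-agent mixing weights.
print(mixed_rewards([1.0, -0.5, 0.2], 0.8, [0.3, 0.7, 0.5]))
```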
