Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification
Ling Pan · Longbo Huang · Tengyu Ma · Huazhe Xu

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #801

Conservatism has led to significant progress in offline reinforcement learning (RL) where an agent learns from pre-collected datasets. However, as many real-world scenarios involve interaction among multiple agents, it is important to resolve offline RL in the multi-agent setting. Given the recent success of transferring online RL algorithms to the multi-agent setting, one may expect that offline RL algorithms will also transfer to multi-agent settings directly. Surprisingly, we empirically observe that conservative offline RL algorithms do not work well in the multi-agent setting---the performance degrades significantly with an increasing number of agents. Towards mitigating the degradation, we identify a key issue that non-concavity of the value function makes the policy gradient improvements prone to local optima. Multiple agents exacerbate the problem severely, since the suboptimal policy by any agent can lead to uncoordinated global failure. Following this intuition, we propose a simple yet effective method, Offline Multi-Agent RL with Actor Rectification (OMAR), which combines the first-order policy gradients and zeroth-order optimization methods to better optimize the conservative value functions over the actor parameters. Despite the simplicity, OMAR achieves state-of-the-art results in a variety of multi-agent control tasks.
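The core idea in the abstract — using a zeroth-order, sampling-based search to escape local optima of a non-concave conservative value function, where pure first-order policy gradients get stuck — can be illustrated with a minimal sketch. Everything here is illustrative: the CEM-style search loop, the toy critic `q_fn` (built to have a flat local optimum and a sharper global one), and all hyperparameters are assumptions for demonstration, not the paper's actual algorithm or settings.

```python
import numpy as np

def rectify_action(q_fn, a_init, n_samples=128, n_iters=10,
                   sigma=1.0, elite_frac=0.25, seed=0):
    """Zeroth-order search around the actor's proposed action.

    Repeatedly sample perturbed actions, keep the highest-Q "elite"
    samples, and refit the sampling distribution to them (a CEM-style
    loop). Returns an action near a good optimum of q_fn even when
    q_fn is non-concave and a_init sits in a poor basin.
    """
    rng = np.random.default_rng(seed)
    mu = np.atleast_1d(np.asarray(a_init, dtype=float))
    std = np.full_like(mu, sigma)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = mu + std * rng.standard_normal((n_samples, mu.size))
        scores = np.array([q_fn(a) for a in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]
        mu = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-6  # avoid premature collapse
    return mu

# Toy non-concave critic: a wide, flat local optimum near a = -1 and a
# sharper, higher global optimum near a = 2. Gradient ascent started in
# the left basin stalls; the sampling search does not.
def q_fn(a):
    x = a[0]
    return -0.1 * (x + 1.0) ** 2 + 2.0 * np.exp(-(x - 2.0) ** 2)

a_star = rectify_action(q_fn, a_init=[0.0])
```

In the actual method, such a rectified action would not replace the policy gradient but complement it: the actor is still updated with first-order gradients of the conservative critic, while an added regression term pulls its output toward the better action found by the sampling search (the relative weighting being a hyperparameter). This sketch only shows the zeroth-order half of that combination.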

Author Information

Ling Pan (Tsinghua University)
Longbo Huang (Tsinghua University)
Tengyu Ma (Stanford University)
Huazhe Xu (Stanford University)
