Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning
Jihwan Oh · Sungnyun Kim · Gahee Kim · SeongHwan Kim · Se-Young Yun
Keywords: [ offline reinforcement learning ] [ diffusion ] [ multi-agent learning ] [ augmentation ]
Abstract:
Offline multi-agent reinforcement learning (MARL) is increasingly recognized as crucial for deploying RL algorithms in environments where real-time interaction is impractical, risky, or costly. In the offline setting, learning from a static dataset of past interactions enables the development of robust and safe policies without live data collection, which can be fraught with challenges. Building on this foundational importance, we present EAQ, Episodes Augmentation guided by Q-total loss, a novel episode-augmentation approach for offline MARL that utilizes diffusion models. EAQ integrates the Q-total function directly into the diffusion model as guidance to maximize the global return of an episode, eliminating the need for separate guidance training. Our focus lies primarily on cooperative scenarios, where agents must act collectively toward a shared goal, essentially maximizing global returns. Consequently, we demonstrate that our collaboratively guided episode augmentation significantly boosts offline MARL algorithms compared to the original dataset, improving the normalized return by +17.3% and +12.9% for the $medium$ and $poor$ behavioral policies in the SMAC simulator, respectively.
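To illustrate the kind of guidance the abstract describes, below is a minimal sketch (not the authors' code) of one Q-total-guided reverse-diffusion step over a batch of episode tensors. It assumes a DDPM-style noise-prediction denoiser and a pretrained Q-total network that maps an episode tensor to a scalar global-return estimate; the names `denoiser`, `q_total_net`, `betas`, `alphas_cumprod`, and `guidance_scale` are hypothetical placeholders, and the gradient-shifted posterior mean is a standard classifier-guidance construction rather than EAQ's exact update.

```python
import torch

@torch.no_grad()
def guided_denoise_step(denoiser, q_total_net, x_t, t,
                        betas, alphas_cumprod, guidance_scale=1.0):
    """One reverse-diffusion step on noised episodes x_t (shape [B, T, D]),
    nudged toward higher predicted global return via the gradient of Q-total.
    `t` is an integer timestep; `betas`/`alphas_cumprod` are 1-D schedules."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = alphas_cumprod[t]

    # Standard DDPM posterior mean from the predicted noise.
    eps = denoiser(x_t, t)
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)

    # Q-total guidance: ascend the gradient of the estimated global return
    # with respect to the partially denoised episode.
    with torch.enable_grad():
        x_in = mean.detach().requires_grad_(True)
        q_total = q_total_net(x_in).sum()          # scalar return estimate over the batch
        grad = torch.autograd.grad(q_total, x_in)[0]
    mean = mean + guidance_scale * beta_t * grad    # shift the mean toward higher Q-total

    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta_t) * noise
```

Iterating this step from pure noise down to t = 0 would yield augmented episodes biased toward higher global returns, which matches the abstract's goal of augmenting the offline dataset in a collaborative, return-maximizing manner.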