One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control

Wenlong Huang · Igor Mordatch · Deepak Pathak

Keywords: [ Deep Reinforcement Learning ] [ Multiagent Learning ] [ Robotics ] [ Reinforcement Learning ] [ Reinforcement Learning - Deep RL ]

[ Abstract ]


Reinforcement learning is typically concerned with learning control policies tailored to a particular agent. We investigate whether there exists a single policy that generalizes to controlling a wide variety of agent morphologies -- ones in which even the dimensionality of the state and action spaces changes. Such a policy would distill general and modular sensorimotor patterns that can be applied to control arbitrary agents. We propose a policy expressed as a collection of identical modular neural networks, one for each of the agent's actuators. Every module is responsible only for controlling its own actuator and receives information from its local sensors. In addition, messages are passed between modules, propagating information between distant modules. A single modular policy can successfully generate locomotion behaviors for over 20 planar agents with different skeletal structures, such as monopod hoppers, quadrupeds, and bipeds, and can generalize to variants not seen during training -- a process that would normally require training and manual hyperparameter tuning for each morphology. We observe drastically diverse locomotion styles across morphologies, as well as centralized coordination emerging via message passing between decentralized modules, purely from the reinforcement learning objective. Video and code:
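The architecture described above can be sketched in a few lines. The following is a minimal illustrative example, not the authors' implementation: a single set of weights is shared across all actuator modules, each module consumes its local observation plus an incoming message and emits a torque plus an outgoing message, and the same policy therefore applies to agents with any number of joints. All names, dimensions, and the one-directional chain topology are assumptions for illustration (the paper's agents have tree-structured skeletons and the method supports richer message passing).

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, MSG_DIM, HID = 4, 8, 16  # illustrative sizes, not from the paper

# One set of weights shared by every actuator module -- this sharing is
# what makes the policy agnostic to the number of joints.
W1 = rng.standard_normal((OBS_DIM + MSG_DIM, HID)) * 0.1
W2 = rng.standard_normal((HID, 1 + MSG_DIM)) * 0.1  # 1 torque + outgoing message

def module(local_obs, msg_in):
    """Shared network: local sensors + incoming message -> torque, message."""
    h = np.tanh(np.concatenate([local_obs, msg_in]) @ W1)
    out = h @ W2
    return out[0], np.tanh(out[1:])

def policy(observations):
    """Apply the same module to each actuator, passing a message along a chain."""
    msg = np.zeros(MSG_DIM)
    actions = []
    for obs in observations:  # one local observation per actuator
        a, msg = module(obs, msg)
        actions.append(a)
    return np.array(actions)

# The same weights control agents with different numbers of actuators:
for n_joints in (3, 7):
    print(n_joints, policy(rng.standard_normal((n_joints, OBS_DIM))).shape)
```

Note how nothing in `policy` depends on the number of actuators: morphology variation only changes how many times the shared module is applied, which is the agent-agnostic property the abstract highlights.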
