

Poster in Workshop: Reinforcement Learning for Real Life

Symbolic Relational Deep Reinforcement Learning based on Graph Neural Networks

Jaromír Janisch · Tomáš Pevný · Viliam Lisý


Abstract:

We focus on reinforcement learning (RL) in relational problems that are naturally defined in terms of objects, their relations, and manipulations. These problems are characterized by variable state and action spaces, and finding a fixed-length representation, required by most existing RL methods, is difficult, if not impossible. We present a deep RL framework based on graph neural networks and auto-regressive policy decomposition that naturally works with these problems and is completely domain-independent. We demonstrate the framework in three distinct domains and report the method's competitive performance and impressive zero-shot generalization across different problem sizes. In goal-oriented BlockWorld, we demonstrate multi-parameter actions with pre-conditions. In SysAdmin, we show how to select multiple objects simultaneously. In the classical planning domain of Sokoban, the method trained exclusively on 10×10 problems with three boxes solves 89% of 15×15 problems with five boxes.
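
A minimal sketch of the idea in the abstract, assuming a PyTorch implementation (the authors' actual architecture, hyperparameters, and action structure may differ): a graph neural network embeds a variable-size set of objects, and an auto-regressive policy first samples an action type and then selects an object parameter conditioned on that choice. The layer sizes, the mean-aggregation message passing, and the single object parameter are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn


class MessagePassing(nn.Module):
    # One round of message passing with a GRU node update (illustrative).
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h, edges):
        # h: (N, dim) node embeddings; edges: (E, 2) long tensor of (src, dst).
        src, dst = edges[:, 0], edges[:, 1]
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum messages per node
        return self.upd(agg, h)


class RelationalPolicy(nn.Module):
    def __init__(self, in_dim, dim=64, n_action_types=3, rounds=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.layers = nn.ModuleList(MessagePassing(dim) for _ in range(rounds))
        self.type_emb = nn.Embedding(n_action_types, dim)
        self.type_head = nn.Linear(dim, n_action_types)  # p(action type | state)
        self.obj_head = nn.Linear(2 * dim, 1)            # p(object | state, type)

    def forward(self, x, edges):
        h = torch.relu(self.embed(x))
        for layer in self.layers:
            h = layer(h, edges)
        g = h.mean(dim=0)  # permutation-invariant graph readout

        # Auto-regressive decomposition: sample the action type first ...
        type_dist = torch.distributions.Categorical(logits=self.type_head(g))
        a = type_dist.sample()

        # ... then score every node conditioned on the chosen type. The head
        # is applied per node, so the policy accepts any number of objects.
        cond = torch.cat([h, self.type_emb(a).expand(h.size(0), -1)], dim=-1)
        obj_dist = torch.distributions.Categorical(
            logits=self.obj_head(cond).squeeze(-1))
        o = obj_dist.sample()

        return a, o, type_dist.log_prob(a) + obj_dist.log_prob(o)


# Toy usage: 4 objects with 8 features each, 5 directed relations.
x = torch.randn(4, 8)
edges = torch.tensor([[0, 1], [1, 0], [1, 2], [2, 3], [3, 1]])
action_type, obj, logp = RelationalPolicy(in_dim=8)(x, edges)

Because both heads operate on per-node embeddings rather than on a fixed-length state vector, the same trained network applies unchanged to larger graphs, which is what makes the kind of zero-shot generalization across problem sizes reported above possible.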
