Oral
Neural Logic Reinforcement Learning
Zhengyao Jiang · Shan Luo

Tue Jun 11th 02:30 -- 02:35 PM @ Room 104

Deep reinforcement learning (DRL) has achieved significant breakthroughs in various tasks. However, most DRL algorithms suffer from poor generalisation of the learned policy: performance can degrade substantially even under minor modifications of the training environment. In addition, the use of deep neural networks makes the learned policies hard to interpret. To tackle these two challenges, we propose a novel algorithm named Neural Logic Reinforcement Learning (NLRL), which represents reinforcement learning policies in first-order logic. NLRL is based on policy gradient methods and differentiable inductive logic programming, which have demonstrated significant advantages in interpretability and generalisability on supervised tasks. Extensive experiments on cliff-walking and blocks manipulation tasks demonstrate that NLRL can induce interpretable policies that achieve near-optimal performance while generalising well to environments with different initial states and problem sizes.
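The abstract describes policies represented as first-order logic rules whose clause weights are learned with policy-gradient methods via differentiable inductive logic programming. The sketch below is an illustrative, hypothetical example of that general idea, not the authors' implementation: it scores each action predicate by softmax-weighting its candidate clauses and soft-conjoining (product t-norm) the truth values of the body atoms. All predicate names, clause templates, and valuations are assumptions made up for illustration.

```python
# Illustrative sketch (not the NLRL code): scoring actions from weighted
# first-order clauses in a differentiable-ILP style.
import numpy as np

def fuzzy_and(vals):
    """Soft conjunction of body-atom truth values (product t-norm)."""
    return float(np.prod(vals))

def action_scores(atom_valuations, clause_bodies, clause_logits):
    """Score each action predicate as a softmax-weighted mix of its candidate clauses.

    atom_valuations: dict mapping ground atoms to truth values in [0, 1].
    clause_bodies:   dict mapping action name -> list of candidate bodies,
                     each body a list of ground atoms.
    clause_logits:   dict mapping action name -> learnable weights, one per body.
    """
    scores = {}
    for action, bodies in clause_bodies.items():
        logits = clause_logits[action]
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()                       # softmax over candidate clauses
        body_vals = [fuzzy_and([atom_valuations[a] for a in body]) for body in bodies]
        scores[action] = float(np.dot(weights, body_vals))
    return scores

# Hypothetical cliff-walking-style state: agent at column 0, goal at column 3.
valuations = {"at(0)": 1.0, "at(3)": 0.0, "goal(3)": 1.0, "left_of(0,3)": 1.0}
bodies = {
    "move_right": [["left_of(0,3)", "at(0)"], ["at(3)"]],
    "move_left":  [["at(3)"], ["goal(3)", "at(3)"]],
}
logits = {a: np.zeros(len(b)) for a, b in bodies.items()}

print(action_scores(valuations, bodies, logits))
# In NLRL-style training, the clause logits would be the only learnable
# parameters, updated with a policy-gradient method such as REINFORCE.
```

The resulting action scores can be normalised into a probability distribution, so the whole pipeline stays differentiable in the clause weights; that is what makes policy-gradient training of logic rules possible in this style of approach.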

Author Information

Zhengyao Jiang (University of Liverpool)
Shan Luo (University of Liverpool)
