

Poster

TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning

Tameem Adel · Adrian Weller

Pacific Ballroom #35

Keywords: [ Transfer and Multitask Learning ] [ Graphical Models ] [ Deep Reinforcement Learning ]


Abstract:

One of the challenges in reinforcement learning (RL) is scalable transferability among complex tasks. Incorporating a graphical model (GM), along with the rich family of related methods, as a basis for RL frameworks offers the potential to address issues such as transferability, generalisation and exploration. Here we propose a flexible GM-based RL framework which leverages efficient inference procedures to enhance generalisation and transfer power. In our proposed transferable and information-based graphical model framework, 'TibGM', we show the equivalence between our mutual information-based objective in the GM and a consolidated RL objective consisting of a standard reward maximisation target and a generalisation/transfer objective. In settings with a sparse or deceptive reward signal, our TibGM framework is flexible enough to incorporate exploration bonuses in the form of intrinsic rewards. We empirically verify improved performance and exploration power.
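The abstract mentions augmenting the reward-maximisation target with exploration bonuses acting as intrinsic rewards when the task reward is sparse or deceptive. The following is a minimal, illustrative sketch of that general idea only, not the TibGM objective itself: it uses a generic count-based bonus as a stand-in for the paper's information-based terms, and the names `intrinsic_bonus`, `shaped_reward` and `beta` are hypothetical.

```python
import numpy as np

def intrinsic_bonus(state, visit_counts):
    """Generic count-based exploration bonus: rarely visited states earn larger bonuses.

    This is a stand-in for an information-based bonus; it is NOT the paper's formulation.
    """
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return 1.0 / np.sqrt(visit_counts[state])

def shaped_reward(extrinsic_reward, state, visit_counts, beta=0.1):
    """Per-step consolidated signal: task reward plus a weighted exploration bonus."""
    return extrinsic_reward + beta * intrinsic_bonus(state, visit_counts)

# Usage: accumulate shaped returns along a short trajectory with a sparse task reward.
visit_counts = {}
trajectory = [((0, 0), 0.0), ((0, 1), 0.0), ((1, 1), 1.0)]  # (state, reward) pairs
total = sum(shaped_reward(r, s, visit_counts) for s, r in trajectory)
print(f"shaped return: {total:.3f}")
```

The weighting `beta` trades off exploitation of the task reward against exploration; with a sparse signal, the bonus keeps the agent moving toward unvisited states until the extrinsic reward is found.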
