

Poster

Implicit Generative Modeling for Efficient Exploration

Neale Ratzlaff · Qinxun Bai · Fuxin Li · Wei Xu

Keywords: [ Bayesian Deep Learning ] [ Deep Reinforcement Learning ] [ Reinforcement Learning ] [ Reinforcement Learning - Deep RL ]


Abstract:

Efficient exploration remains a challenging problem in reinforcement learning, especially for tasks where rewards from the environment are sparse. In this work, we introduce an exploration approach based on a novel implicit generative modeling algorithm that estimates the Bayesian uncertainty in the agent's belief about the environment dynamics. Each random draw from our generative model is a neural network that instantiates the dynamics function, so multiple draws approximate the posterior, and the variance of predictions under this posterior is used as an intrinsic reward for exploration. We design a training algorithm for our generative model based on amortized Stein Variational Gradient Descent. In experiments, we demonstrate the effectiveness of this exploration algorithm on both pure exploration tasks and a downstream task, comparing against state-of-the-art intrinsic-reward-based exploration approaches, including two recent approaches based on an ensemble of dynamics models. On challenging exploration tasks, our implicit generative model consistently outperforms competing approaches in terms of data efficiency during exploration.
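As a rough illustration of the exploration bonus described above (not the authors' code), the sketch below assumes a hypothetical generator that maps noise vectors to the flat parameter vector of a small dynamics MLP; the intrinsic reward for a transition is then the variance of next-state predictions across several sampled networks. All names, layer sizes, and dimensions are illustrative assumptions, and training of the generator (which the paper does via amortized SVGD) is omitted.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; not taken from the paper.
STATE_DIM, ACTION_DIM, NOISE_DIM, HIDDEN = 8, 2, 32, 64
N_PARAMS = (STATE_DIM + ACTION_DIM) * HIDDEN + HIDDEN + HIDDEN * STATE_DIM + STATE_DIM


class WeightGenerator(nn.Module):
    """Hypothetical generator: maps a noise vector z to a flat parameter
    vector for a two-layer dynamics network f(s, a) -> s'."""

    def __init__(self, n_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def dynamics_forward(flat_w: torch.Tensor, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    """Run a two-layer MLP dynamics model whose weights are read from flat_w."""
    x = torch.cat([s, a], dim=-1)
    i = 0
    w1 = flat_w[i:i + (STATE_DIM + ACTION_DIM) * HIDDEN].view(HIDDEN, STATE_DIM + ACTION_DIM)
    i += w1.numel()
    b1 = flat_w[i:i + HIDDEN]; i += HIDDEN
    w2 = flat_w[i:i + HIDDEN * STATE_DIM].view(STATE_DIM, HIDDEN); i += w2.numel()
    b2 = flat_w[i:i + STATE_DIM]
    h = torch.relu(x @ w1.t() + b1)
    return h @ w2.t() + b2


def intrinsic_reward(generator: WeightGenerator, s: torch.Tensor, a: torch.Tensor,
                     n_samples: int = 8) -> float:
    """Exploration bonus: variance of next-state predictions across dynamics
    networks drawn from the generator, averaged over state dimensions."""
    z = torch.randn(n_samples, NOISE_DIM)
    weights = generator(z)                                   # (n_samples, N_PARAMS)
    preds = torch.stack([dynamics_forward(w, s, a) for w in weights])
    return preds.var(dim=0).mean().item()


gen = WeightGenerator(N_PARAMS)
s, a = torch.randn(STATE_DIM), torch.randn(ACTION_DIM)
print(intrinsic_reward(gen, s, a))
```

In this reading, the generator plays the role of the implicit posterior over dynamics models: transitions where the sampled networks disagree receive a large bonus, steering the agent toward poorly understood regions of the state-action space.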
