

Poster in Workshop: Decision Awareness in Reinforcement Learning

CoMBiNED: Multi-Constrained Model Based Planning for Navigation in Dynamic Environments

Harit Pandya · Rudra Poudel · Stephan Liwicki


Abstract:

Recent model-based planning approaches have achieved remarkable success on Atari games. However, learning accurate models for complex robotics scenarios such as navigation directly from high-dimensional sensory measurements requires large amounts of data and training. Furthermore, even a small change in the robot configuration at inference time, such as its kino-dynamics or sensors, requires retraining the policy. In this paper, we address these issues in a principled fashion through a multi-constraint model-based online planning (CoMBiNED) framework that does not require any retraining or modification of the existing policy. We disentangle the given task into sub-tasks and learn a dynamical model for each. Treating these dynamical models as soft constraints, we employ stochastic optimisation via the cross-entropy method to jointly optimise these sub-tasks on-the-fly. We consider navigation as the central application in this work, evaluate our approach on a publicly available benchmark with complex dynamic scenarios, and achieve significant improvements over recent approaches both with and without a given map of the environment.
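To make the planning procedure concrete, below is a minimal sketch of cross-entropy-method (CEM) planning over a learned dynamics model, where several sub-task cost functions act as weighted soft constraints. This is an illustrative outline only, not the authors' implementation; all names (`dynamics_model`, `sub_task_costs`, `weights`) and hyperparameters are assumptions.

```python
import numpy as np

def cem_plan(state, dynamics_model, sub_task_costs, weights,
             horizon=10, action_dim=2, n_samples=500, n_elite=50, n_iters=5):
    """Sketch: sample candidate action sequences, roll them out with a learned
    dynamics model, score them against every sub-task cost (soft constraints),
    and refit the sampling distribution to the elite candidates."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))

    for _ in range(n_iters):
        # Sample candidate action sequences from the current Gaussian.
        actions = mean + std * np.random.randn(n_samples, horizon, action_dim)

        # Roll out each candidate and accumulate the weighted sum of
        # sub-task costs along the trajectory.
        scores = np.zeros(n_samples)
        for i in range(n_samples):
            s = state
            for t in range(horizon):
                s = dynamics_model(s, actions[i, t])
                scores[i] += sum(w * cost(s, actions[i, t])
                                 for w, cost in zip(weights, sub_task_costs))

        # Keep the lowest-cost (elite) sequences and refit mean/std.
        elite = actions[np.argsort(scores)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6

    # Receding-horizon control: execute only the first planned action.
    return mean[0]
```

Because the sub-task models and their weights enter only through the scoring step, a new sensor or kino-dynamic configuration can, in principle, be accommodated by swapping or re-weighting the corresponding model rather than retraining a policy, which is the property the abstract emphasises.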
