
Learning in POMDPs with Monte Carlo Tree Search
Sammie Katt · Frans A Oliehoek · Chris Amato

Mon Aug 07 01:30 AM -- 05:00 AM (PDT) @ Gallery #115

The partially observable Markov decision process (POMDP) is a powerful framework for reasoning under outcome and information uncertainty, but constructing an accurate POMDP model is difficult. Bayes-Adaptive POMDPs (BA-POMDPs) extend POMDPs to allow the model to be learned during execution. BA-POMDPs are a Bayesian RL approach that, in principle, allows for an optimal trade-off between exploration and exploitation. Unfortunately, BA-POMDPs are currently impractical to solve for any non-trivial domain. In this paper, we extend the Monte-Carlo Tree Search method POMCP to BA-POMDPs and show that the resulting method, which we call BA-POMCP, is able to tackle problems that previous solution methods have been unable to solve. Additionally, we introduce several techniques that exploit the BA-POMDP structure to improve the efficiency of BA-POMCP, along with proofs of their convergence.
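To make the idea concrete, here is a minimal sketch of a POMCP-style simulation over a count-augmented (BA-POMDP-like) state. It is not the paper's BA-POMCP implementation: the toy domain (two states, actions, and observations), the reward function, and the single global Dirichlet count table are all illustrative assumptions. The key mechanic it shows is that each simulation samples successor states and observations from the current Dirichlet counts, temporarily updates those counts as it descends (learning the model inside the lookahead), and undoes the updates on the way back so the root belief is untouched.

```python
import math
import random
from collections import defaultdict

# Assumed toy domain sizes (not from the paper).
N_STATES, N_ACTIONS, N_OBS = 2, 2, 2

def sample_dynamics(counts, s, a):
    """Sample (s', o) from the Dirichlet-multinomial implied by the counts."""
    weights = [counts[(s, a, s2, o)]
               for s2 in range(N_STATES) for o in range(N_OBS)]
    r = random.random() * sum(weights)
    for idx, w in enumerate(weights):
        r -= w
        if r <= 0:
            return divmod(idx, N_OBS)  # decode flat index into (s', o)
    return divmod(len(weights) - 1, N_OBS)

def reward(s, a):
    # Assumed toy reward: action a pays off when it matches the state.
    return 1.0 if s == a else 0.0

def rollout(s, counts, depth, gamma):
    """Random-policy rollout used to initialize new tree nodes."""
    total, disc = 0.0, 1.0
    while gamma ** depth >= 0.05:
        a = random.randrange(N_ACTIONS)
        s2, _ = sample_dynamics(counts, s, a)
        total += disc * reward(s, a)
        disc *= gamma
        s, depth = s2, depth + 1
    return total

def simulate(s, counts, history, depth, tree, gamma=0.95):
    """One MCTS simulation through the count-augmented state."""
    if gamma ** depth < 0.05:
        return 0.0
    node = tree[history]
    if node["n"] == 0:          # expand a new node, then rollout
        node["n"] = 1
        return rollout(s, counts, depth, gamma)
    # UCB1 action selection over the node's statistics.
    logN = math.log(node["n"] + 1)
    a = max(range(N_ACTIONS),
            key=lambda a: node["q"][a]
            + math.sqrt(2 * logN / (node["na"][a] + 1)))
    s2, o = sample_dynamics(counts, s, a)
    counts[(s, a, s2, o)] += 1  # Bayesian count update along this simulation
    r = reward(s, a) + gamma * simulate(s2, counts, history + ((a, o),),
                                        depth + 1, tree, gamma)
    counts[(s, a, s2, o)] -= 1  # undo: simulations must not alter the root belief
    node["n"] += 1
    node["na"][a] += 1
    node["q"][a] += (r - node["q"][a]) / node["na"][a]
    return r

def plan(s, counts, n_sims=200):
    """Run n_sims simulations from the root and return the greedy action."""
    tree = defaultdict(lambda: {"n": 0,
                                "na": [0] * N_ACTIONS,
                                "q": [0.0] * N_ACTIONS})
    for _ in range(n_sims):
        simulate(s, counts, (), 0, tree)
    return max(range(N_ACTIONS), key=lambda a: tree[()]["q"][a])

# Usage: uniform Dirichlet prior (count 1 everywhere), then plan.
random.seed(0)
counts = {(s, a, s2, o): 1.0
          for s in range(N_STATES) for a in range(N_ACTIONS)
          for s2 in range(N_STATES) for o in range(N_OBS)}
best = plan(0, counts)
```

The increment/decrement pair around the recursive call is the count-augmented part of the state: within a simulation the sampled model sharpens as experience accumulates, but every update is rolled back before the next simulation starts, so planning never corrupts the agent's actual posterior.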

Author Information

Sammie Katt (Northeastern University)
Frans A Oliehoek (University of Liverpool)
Chris Amato (Northeastern University)
