Poster in Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives
Power Mean Estimation in Stochastic Monte-Carlo Tree Search
Tuan Dam · Odalric-Ambrym Maillard · Emilie Kaufmann
Abstract:
Monte-Carlo Tree Search (MCTS) is a widely used strategy for online planning that combines Monte-Carlo sampling with forward tree search. Its success relies on the Upper Confidence bound for Trees (UCT) algorithm, an extension of the UCB method for multi-armed bandits. However, the theoretical foundation of UCT is incomplete due to an error in the logarithmic bonus term for action selection, which led to the development of Fixed-Depth-MCTS with a polynomial exploration bonus to balance exploration and exploitation. Both UCT and Fixed-Depth-MCTS suffer from biased value estimation: the weighted sum underestimates the optimal value, while the maximum overestimates it. The power mean estimator offers a balanced alternative, lying between the average and the maximum. Power-UCT incorporates this estimator for more accurate value estimates, but its theoretical analysis remains incomplete. This paper introduces Stochastic-Power-UCT, an MCTS algorithm for stochastic MDPs using the power mean estimator. We analyze its polynomial convergence in estimating root node values and show that it shares the same convergence rate of $\mathcal{O}(n^{-1/2})$, where $n$ is the number of visited trajectories, as Fixed-Depth-MCTS, with the latter being a special case of the former. Our theoretical results are validated with empirical tests across various stochastic MDP environments.
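The power mean estimator referenced in the abstract can be sketched in a few lines: for nonnegative values $x_1,\dots,x_n$ and exponent $p \geq 1$, $M_p(x) = \left(\frac{1}{n}\sum_i x_i^p\right)^{1/p}$, which recovers the arithmetic mean at $p = 1$ and approaches the maximum as $p \to \infty$. The sketch below is illustrative only (function name and test values are our own, not from the paper):

```python
import numpy as np

def power_mean(values, p):
    """Power mean M_p(x) = ((1/n) * sum(x_i^p))^(1/p).

    Assumes nonnegative values; interpolates between the
    arithmetic mean (p = 1) and the maximum (p -> infinity).
    """
    values = np.asarray(values, dtype=float)
    return np.mean(values ** p) ** (1.0 / p)

vals = [1.0, 2.0, 4.0]
print(power_mean(vals, 1))   # arithmetic mean
print(power_mean(vals, 50))  # close to max(vals) = 4
```

Because $M_p$ is non-decreasing in $p$, the exponent acts as a tunable knob between the underestimating average and the overestimating maximum, which is the trade-off the abstract describes.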