Monte Carlo Tree Search for Bayesian Reinforcement Learning

Vien Ngo, Wolfgang Ertel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

Bayesian model-based reinforcement learning can be formulated as a partially observable Markov decision process (POMDP) to provide a principled framework for optimally balancing exploitation and exploration; a POMDP solver can then be used to solve the resulting problem. If the prior distribution over the environment's dynamics is a product of Dirichlet distributions, the POMDP's optimal value function can be represented by a set of multivariate polynomials. Unfortunately, the size of these polynomials grows exponentially with the problem horizon. In this paper, we examine the use of an online Monte-Carlo tree search (MCTS) algorithm for large POMDPs to solve the Bayesian reinforcement learning problem online, and show that such an algorithm successfully finds a near-optimal policy. In addition, we examine the use of a parameter-tying method to keep the model search space small, and propose a nested mixture of tied models to increase the robustness of the method when our prior information does not allow us to specify the structure of the tied models exactly. Experiments show that the proposed methods substantially improve the scalability of current Bayesian reinforcement learning methods.
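To make the idea concrete, the sketch below shows a minimal Monte-Carlo tree search over a Bayes-adaptive MDP: the belief over the environment's dynamics is a product of Dirichlet distributions (maintained as per state-action transition counts), and simulated transitions are sampled from that posterior while UCB guides action selection in the tree. This is a hypothetical, simplified illustration of the general technique, not the authors' implementation; all names (BayesAdaptiveMCTS, reward_fn, n_simulations, etc.) are assumptions, transitions are resampled from the current posterior at every simulated step rather than fixing one model per simulation, and the paper's parameter-tying and nested-mixture components are not shown.

```python
# Minimal sketch of MCTS over a Bayes-adaptive MDP (illustrative only).
import math
import random
from collections import defaultdict

class BayesAdaptiveMCTS:
    def __init__(self, n_states, n_actions, reward_fn, gamma=0.95,
                 ucb_c=1.4, horizon=15, n_simulations=200, prior=1.0):
        self.nS, self.nA = n_states, n_actions
        self.reward_fn = reward_fn            # reward_fn(s, a, s_next) -> float
        self.gamma, self.ucb_c = gamma, ucb_c
        self.horizon, self.n_simulations = horizon, n_simulations
        # Dirichlet pseudo-counts for each (s, a): the posterior over next states.
        self.counts = defaultdict(lambda: [prior] * n_states)

    def update_posterior(self, s, a, s_next):
        """Incorporate one real transition into the Dirichlet posterior."""
        self.counts[(s, a)][s_next] += 1.0

    def _sample_next(self, s, a):
        """Sample s' from a transition distribution drawn from the Dirichlet posterior."""
        theta = [random.gammavariate(c, 1.0) for c in self.counts[(s, a)]]
        z = sum(theta)
        r, acc = random.random() * z, 0.0
        for s_next, w in enumerate(theta):
            acc += w
            if acc >= r:
                return s_next
        return self.nS - 1

    def search(self, root_state):
        """Run simulations from root_state and return the action with the best estimated value."""
        N = defaultdict(int)      # visit counts per (node, action)
        Q = defaultdict(float)    # mean return per (node, action)
        Nn = defaultdict(int)     # visit counts per node

        def simulate(s, depth, node):
            if depth >= self.horizon:
                return 0.0
            # UCB1 action selection; untried actions are expanded first.
            untried = [a for a in range(self.nA) if N[(node, a)] == 0]
            if untried:
                a = random.choice(untried)
            else:
                a = max(range(self.nA), key=lambda a: Q[(node, a)] +
                        self.ucb_c * math.sqrt(math.log(Nn[node]) / N[(node, a)]))
            s_next = self._sample_next(s, a)
            r = self.reward_fn(s, a, s_next)
            # Child nodes are keyed by the action/outcome history from the root.
            ret = r + self.gamma * simulate(s_next, depth + 1, node + (a, s_next))
            Nn[node] += 1
            N[(node, a)] += 1
            Q[(node, a)] += (ret - Q[(node, a)]) / N[(node, a)]
            return ret

        for _ in range(self.n_simulations):
            simulate(root_state, 0, ())
        return max(range(self.nA), key=lambda a: Q[((), a)])

# Hypothetical usage on a tiny 2-state, 2-action problem:
# agent = BayesAdaptiveMCTS(n_states=2, n_actions=2,
#                           reward_fn=lambda s, a, s2: float(s2 == 1))
# a = agent.search(root_state=0)
# agent.update_posterior(0, a, observed_next_state)  # after acting in the real environment
```

Planning online in this way avoids ever constructing the exponentially large polynomial representation of the optimal value function: each decision only requires a bounded number of posterior-sampled simulations from the current belief.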
Original language: English
Title of host publication: 11th International Conference on Machine Learning and Applications (ICMLA)
Pages: 138-143
Number of pages: 6
DOIs
Publication status: Published - 2012
