Poster in Workshop: PAC-Bayes Meets Interactive Learning

Bayesian Risk-Averse Q-Learning with Streaming Data

Yuhao Wang · Enlu Zhou


Abstract:

We consider a robust reinforcement learning problem in which a learning agent learns from a simulated training environment. We adopt an infinite-horizon Bayesian risk MDP (BRMDP) formulation, which uses a Bayesian posterior to estimate the transition model and imposes a risk functional to account for model uncertainty. Observations from the real environment, which is out of the agent's control, arrive periodically and are used by the agent to update the Bayesian posterior and reduce model uncertainty. We theoretically demonstrate that BRMDP balances the trade-off between robustness and conservativeness, and we further develop a multi-stage Bayesian risk-averse Q-learning algorithm with a provable performance guarantee to solve the BRMDP with streaming observations from the real environment.
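To make the abstract's ingredients concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a small tabular MDP, a Dirichlet posterior over transition probabilities, and CVaR of the lower tail as the risk functional over posterior samples; all names (alpha, cvar_level, risk_averse_q_update, etc.) are hypothetical. The first function forms a risk-averse Bellman target by averaging the worst posterior-sampled one-step returns; the second incorporates streaming real-environment transitions by updating the Dirichlet counts.

import numpy as np

def cvar(values, level=0.2):
    # Lower-tail CVaR: mean of the worst `level` fraction of sampled targets.
    values = np.sort(values)
    k = max(1, int(np.ceil(level * len(values))))
    return values[:k].mean()

def risk_averse_q_update(Q, alpha, rewards, s, a, gamma=0.95, lr=0.1,
                         n_posterior_samples=50, cvar_level=0.2):
    # One Q-learning step: the Bellman target is the CVaR, over the Dirichlet
    # posterior on transitions out of (s, a), of the expected one-step return.
    targets = []
    for _ in range(n_posterior_samples):
        p = np.random.dirichlet(alpha[s, a])                      # sample a transition model
        targets.append(np.dot(p, rewards[s, a] + gamma * Q.max(axis=1)))
    target = cvar(np.array(targets), cvar_level)                  # risk functional over posterior
    Q[s, a] += lr * (target - Q[s, a])
    return Q

def update_posterior(alpha, observations):
    # Streaming real-environment transitions (s, a, s') increment the Dirichlet
    # counts, shrinking model uncertainty and, with it, the conservativeness.
    for s, a, s_next in observations:
        alpha[s, a, s_next] += 1.0
    return alpha

As the posterior concentrates with more real observations, the sampled targets cluster and the CVaR target approaches the nominal Bellman target, which is the robustness-versus-conservativeness trade-off the abstract refers to.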
