Poster in Workshop: Reinforcement Learning for Real Life
Learning a Markov Model for Evaluating Soccer Decision Making
Maaike Van Roy · Pieter Robberechts · Wen-Chi Yang · Luc De Raedt · Jesse Davis
Reinforcement learning techniques are often used to model and analyze the behavior of sports teams and players. However, learning these models from observed data is challenging: the data is very sparse and does not include the intended end locations of actions, which are needed to model decision making. Evaluating the learned models is also extremely difficult, as no ground truth is available. In this work, we propose an approach that addresses these challenges when learning a Markov model of professional soccer matches from event stream data. We combine predictive modelling with domain knowledge to obtain the intended end locations of actions, and we learn the transition model using a Bayesian approach to resolve the sparsity issues. We provide intermediate evaluations as well as an approach for evaluating the final model. Finally, we demonstrate the model's usefulness in practice for both evaluating and rating players' decision making, using data from the 17/18 and 18/19 English Premier League seasons.
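To illustrate the kind of Bayesian smoothing the abstract refers to for sparse transition data, the following is a minimal sketch (not the authors' code): it estimates a zone-to-zone Markov transition matrix from observed action start and end zones using a symmetric Dirichlet prior, so that zones with few observations fall back toward the prior. The pitch discretization, prior strength, and all names (N_ZONES, ALPHA, fit_transition_model) are illustrative assumptions, not details from the paper.

import numpy as np

N_ZONES = 12 * 8   # assumed discretization of the pitch into zones
ALPHA = 1.0        # assumed symmetric Dirichlet prior strength

def fit_transition_model(transitions, n_zones=N_ZONES, alpha=ALPHA):
    """Posterior-mean transition matrix from observed (from_zone, to_zone) pairs."""
    counts = np.zeros((n_zones, n_zones))
    for start, end in transitions:
        counts[start, end] += 1
    # Dirichlet-multinomial posterior mean: rows with few (or no) observations
    # are smoothed toward the uniform prior instead of being undefined or noisy.
    return (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha * n_zones)

# Example with three observed moves between zones 0, 1, and 2:
P = fit_transition_model([(0, 1), (1, 2), (0, 2)], n_zones=3)
print(P.round(2))

In this sketch, the posterior mean plays the role of the learned transition model; how the actual paper parameterizes states, actions, and priors is not specified in the abstract.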