

Presentation in Workshop: Workshop on Reinforcement Learning Theory

Invited Speaker: Art Owen: Empirical likelihood for reinforcement learning


Abstract:

Empirical likelihood (EL) is a statistical method that uses a nonparametric likelihood function. It allows one to construct confidence intervals without having to specify that the data come from some parametric distribution such as a Gaussian. Operationally, EL involves a strategic reweighting of the observed data to attain its goals. This makes it similar to importance sampling and self-normalized importance sampling, both widely used for off-policy evaluation in reinforcement learning (RL). Recently, EL has been used in off-policy evaluation and in distributionally robust inference. This talk gives the basic motivation and some results about EL thought to be useful for RL: (a) EL inferences can be as powerful as, or more powerful than, their parametric counterparts, depending on how one keeps score. (b) There is a natural way to incorporate sampling bias via reweighting. (c) One can exploit side knowledge expressed as some known expected values. (d) An empirical likelihood can be paired with a prior distribution to get Bayesian inferences on quantities of interest without having to choose a parametric distribution.
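
To make the reweighting idea concrete, here is a minimal sketch (not from the talk) of an EL confidence interval for a mean. It uses the standard dual form of the profile EL, in which the maximizing weights are w_i = 1 / (n * (1 + lam * (x_i - mu))) for a Lagrange multiplier lam, and the interval is the set of mu with -2 * log R(mu) below the chi-squared cutoff. The data, function names, and grid are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el_ratio(x, mu):
    # -2 * log empirical likelihood ratio for the mean of x at candidate mu.
    # Dual form: weights w_i = 1 / (n * (1 + lam * (x_i - mu))), where lam
    # solves sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0.
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu outside the data's convex hull: ratio is zero
    # lam must keep every 1 + lam * z_i positive; bracket just inside that range
    lo = -1.0 / z.max() * (1 - 1e-9)
    hi = -1.0 / z.min() * (1 - 1e-9)
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(size=200)            # skewed data; no Gaussian assumption
cutoff = chi2.ppf(0.95, df=1)            # Wilks-type calibration for one mean
grid = np.linspace(x.mean() - 0.3, x.mean() + 0.3, 601)
inside = [m for m in grid if neg2_log_el_ratio(x, m) <= cutoff]
print(f"95% EL interval for the mean: [{min(inside):.3f}, {max(inside):.3f}]")

No parametric family is fit anywhere in this sketch: the interval comes entirely from reweighting the observed points, the same mechanism that connects EL to self-normalized importance sampling in off-policy evaluation.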