Spotlight

Batch Value-function Approximation with Only Realizability

Tengyang Xie · Nan Jiang

Session: Learning Theory 1
Wed 21 Jul 5:20 a.m. — 5:25 a.m. PDT

We make progress on a long-standing problem of batch reinforcement learning (RL): learning Q* from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. All existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen & Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action-space partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection, among other extensions and open problems.
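The high-level procedure described above can be sketched in code. This is only an illustrative toy, not the paper's actual algorithm or analysis: the function names (`bvft_select`), the dataset format, the discretization `resolution`, and the aggregation by worst-case pairwise loss are all assumptions made for the sketch. The idea it illustrates: for each pair of candidate Q-functions, partition state-action pairs by discretizing the two functions' values jointly, measure each function's projected Bellman error on that partition, and return the candidate with the smallest worst-case pairwise loss.

```python
import numpy as np

def bvft_select(funcs, dataset, gamma=0.99, resolution=0.1):
    """Hypothetical BVFT-style tournament sketch (not the paper's exact method).

    funcs   : list of candidate Q-functions, each f(s, a) -> float
    dataset : list of (s, a, r, s_next, next_actions) transitions
    Returns the index of the candidate with the smallest worst-case
    pairwise loss over all comparisons.
    """
    n = len(funcs)
    worst = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Partition state-action pairs by discretizing the joint
            # value pair (f_i(s,a), f_j(s,a)) at the chosen resolution.
            cells = {}
            for (s, a, r, s_next, acts) in dataset:
                key = (round(funcs[i](s, a) / resolution),
                       round(funcs[j](s, a) / resolution))
                # Bootstrapped Bellman target for f_i on this transition.
                target = r + gamma * max(funcs[i](s_next, b) for b in acts)
                cells.setdefault(key, []).append((funcs[i](s, a), target))
            # Projected Bellman error of f_i on the (i, j) partition:
            # within each cell, compare the average prediction to the
            # average bootstrapped target, then weight by cell size.
            err, total = 0.0, 0
            for vals in cells.values():
                preds, targets = zip(*vals)
                err += len(vals) * (np.mean(preds) - np.mean(targets)) ** 2
                total += len(vals)
            worst[i] = max(worst[i], err / total)
    return int(np.argmin(worst))
```

On a toy self-loop MDP with reward 1 and gamma 0.5 (so Q* = 2), the sketch picks the constant function 2 over the constant function 0, since only the former has zero projected Bellman error on every partition.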
