

Poster

Batch Value-function Approximation with Only Realizability

Tengyang Xie · Nan Jiang

Virtual

Keywords: [ Algorithms ] [ RL, Decisions and Control Theory ]


Abstract:

We make progress on a long-standing problem of batch reinforcement learning (RL): learning Q* from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen & Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action-space partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection, among other extensions and open problems.
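To make the tournament idea in the abstract concrete, here is a minimal sketch, assuming a finite list of candidate Q-functions evaluated on a batch of (s, a, r, s') transitions. Each pair of candidates induces a partition of the state-action space by discretizing their value pairs; a candidate's pairwise loss is the empirical distance between it and its Bellman backup projected onto functions that are constant on that partition, and the candidate with the smallest worst-case pairwise loss wins. The bvft_select name, the q(s, a) / max_a(s') interface, and the resolution parameter are illustrative placeholders, not the paper's exact formulation.

```python
import numpy as np


def bvft_select(q_funcs, dataset, gamma, resolution):
    """Select a Q-function from q_funcs via a pairwise-comparison tournament.

    q_funcs : list of candidates; each q supports q(s, a) -> float and
              q.max_a(s_next) -> max over actions of q(s_next, a)
              (hypothetical interface for this sketch).
    dataset : list of (s, a, r, s_next) transitions from the behavior policy.
    gamma   : discount factor.
    resolution : width of the value-discretization grid used to build the
                 pairwise state-action partition.
    """
    n = len(dataset)
    # Candidate values and their Bellman backup targets on the batch.
    vals = np.array([[q(s, a) for (s, a, r, sp) in dataset] for q in q_funcs])
    targets = np.array([[r + gamma * q.max_a(sp) for (s, a, r, sp) in dataset]
                        for q in q_funcs])

    def pairwise_loss(i, j):
        # Partition the sampled (s, a) pairs by discretizing the value pair
        # (q_i(s, a), q_j(s, a)) at the given resolution.
        keys = zip(np.floor(vals[i] / resolution).astype(int),
                   np.floor(vals[j] / resolution).astype(int))
        cells = {}
        for idx, key in enumerate(keys):
            cells.setdefault(key, []).append(idx)
        # Project q_i's Bellman backup onto functions that are constant on
        # each cell: the projection is the cell-wise mean of the targets.
        proj = np.empty(n)
        for idxs in cells.values():
            proj[idxs] = targets[i][idxs].mean()
        # Empirical L2 distance between q_i and its projected backup.
        return np.sqrt(np.mean((vals[i] - proj) ** 2))

    # Tournament: score each candidate by its worst pairwise loss and
    # return the candidate with the smallest worst-case score.
    scores = [max(pairwise_loss(i, j) for j in range(len(q_funcs)))
              for i in range(len(q_funcs))]
    return q_funcs[int(np.argmin(scores))]
```

The worst-case-over-opponents score is what turns the pairwise comparisons into a selection rule: a candidate only wins if it looks Bellman-consistent on every partition any rival can induce against it.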
