

Poster in Workshop: Workshop on Reinforcement Learning Theory

Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature

Kefan Dong · Jiaqi Yang · Tengyu Ma


Abstract:

This paper studies model-based bandit and reinforcement learning (RL) with nonlinear function approximations. We propose to study convergence to approximate local maxima, because we show that global convergence is statistically intractable even for a one-layer neural net bandit with a deterministic reward. For both nonlinear bandits and RL, the paper presents a model-based algorithm, Virtual Ascent with Online Model Learner (ViOlin), which provably converges to a local maximum with sample complexity that depends only on the sequential Rademacher complexity of the model class. Our bounds imply novel results in several concrete settings, such as linear bandits with a finite model class or sparse models, and two-layer neural net bandits. A key algorithmic insight is that optimism may lead to over-exploration even for a one-layer neural net model class. On the other hand, for convergence to local maxima, it suffices to maximize the virtual return if the model can also predict the size of the gradient and Hessian of the return.
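
For intuition only, below is a minimal sketch of the "maximize the virtual return" idea for a one-layer (single tanh unit) bandit with a deterministic reward. It is not the paper's ViOlin algorithm: ViOlin additionally requires the learned model to predict the size of the gradient and Hessian of the return and carries the sequential Rademacher complexity guarantees described above. The model class, fitting procedure, step sizes, and horizon here are all illustrative assumptions.

# Illustrative sketch (not the paper's ViOlin algorithm): a model-based
# nonlinear bandit loop that (i) fits a reward model to past observations and
# (ii) picks the next action by gradient ascent on the learned ("virtual")
# reward, instead of adding an optimistic bonus.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # action dimension (assumed)
theta_star = rng.normal(size=d)         # unknown parameter of the true reward

def true_reward(a):
    # Deterministic nonlinear reward from a one-layer (single-neuron) class.
    return np.tanh(theta_star @ a)

def model_reward(theta, a):
    # Learned model drawn from the same one-layer class.
    return np.tanh(theta @ a)

def fit_model(actions, rewards, steps=200, lr=0.5):
    # Online model learning: least-squares fit of theta by gradient descent.
    theta = np.zeros(d)
    for _ in range(steps):
        grad = np.zeros(d)
        for a, r in zip(actions, rewards):
            pred = model_reward(theta, a)
            grad += (pred - r) * (1 - pred ** 2) * a   # d tanh(z)/dz = 1 - tanh(z)^2
        theta -= lr * grad / max(len(actions), 1)
    return theta

def virtual_ascent(theta, a0, steps=50, lr=0.1):
    # Maximize the virtual reward (reward under the learned model) by
    # projected gradient ascent over the unit ball of actions.
    a = a0.copy()
    for _ in range(steps):
        pred = model_reward(theta, a)
        a += lr * (1 - pred ** 2) * theta              # gradient of tanh(theta @ a) in a
        a /= max(np.linalg.norm(a), 1.0)               # project back to ||a|| <= 1
    return a

actions, rewards = [], []
a = rng.normal(size=d); a /= np.linalg.norm(a)
for t in range(30):
    r = true_reward(a)                                 # play the action, observe reward
    actions.append(a); rewards.append(r)
    theta_hat = fit_model(actions, rewards)            # refit the model online
    a = virtual_ascent(theta_hat, a)                   # ascend the virtual return

print("final reward:", true_reward(a),
      "optimum:", np.tanh(np.linalg.norm(theta_star)))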
