Spotlight
Provable Reinforcement Learning with a Short-Term Memory
Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi

Tue Jul 19 07:45 AM -- 07:50 AM (PDT) @ Hall F

Real-world sequential decision-making problems commonly involve partial observability, which requires the agent to maintain a memory of the history in order to infer the latent states, plan, and make good decisions. Coping with partial observability in general is extremely challenging, as a number of worst-case statistical and computational barriers are known for learning Partially Observable Markov Decision Processes (POMDPs). Motivated by the problem structure in several physical applications, as well as a commonly used technique known as "frame stacking", this paper proposes to study a new subclass of POMDPs whose latent states can be decoded from the most recent history of a short length m. We establish a set of upper and lower bounds on the sample complexity of learning near-optimal policies for this class of problems in both the tabular and rich-observation settings (where the number of observations is enormous). In particular, in the rich-observation setting, we develop new algorithms using a novel "moment matching" approach, with a sample complexity that scales exponentially with the short length m rather than the problem horizon and is independent of the number of observations. Our results show that a short-term memory suffices for reinforcement learning in these environments.
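The "frame stacking" technique the abstract references can be illustrated with a minimal sketch (not from the paper): the agent conditions a memoryless policy on the m most recent observations, which is exactly the short-term memory the studied POMDP subclass assumes is sufficient. The class name FrameStack and its interface below are hypothetical and chosen only for illustration.

```python
from collections import deque
import numpy as np


class FrameStack:
    """Keep the last m observations and expose them as one stacked state.

    This mirrors the idea behind m-step decodable POMDPs: if the latent
    state can be decoded from the last m observations, a policy over the
    stacked frames can behave as if the problem were Markovian.
    """

    def __init__(self, m: int):
        self.m = m
        self.frames = deque(maxlen=m)

    def reset(self, first_obs: np.ndarray) -> np.ndarray:
        # Pad the buffer with copies of the first observation so the
        # stacked state has a fixed size from the very first step.
        self.frames.clear()
        for _ in range(self.m):
            self.frames.append(first_obs)
        return self.state()

    def step(self, obs: np.ndarray) -> np.ndarray:
        # Append the newest observation; the oldest one drops out.
        self.frames.append(obs)
        return self.state()

    def state(self) -> np.ndarray:
        # Concatenate the m most recent observations into the effective
        # state the agent's policy conditions on.
        return np.concatenate(list(self.frames), axis=0)


if __name__ == "__main__":
    stacker = FrameStack(m=4)
    s = stacker.reset(np.zeros(3))
    for t in range(5):
        s = stacker.step(np.full(3, t + 1.0))
    print(s.shape)  # (12,): 4 stacked 3-dimensional observations
```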

Author Information

Yonathan Efroni (Microsoft Research, New York)
Chi Jin (Princeton University)
Akshay Krishnamurthy (Microsoft Research)
Sobhan Miryoosefi (Princeton University)
