

Poster

Analysis of Stochastic Processes through Replay Buffers

Shirli Di-Castro Shashua · Shie Mannor · Dotan Di Castro

Hall E #827

Keywords: [ OPT: Sampling and Optimization ] [ RL: Average Cost/Reward ] [ MISC: General Machine Learning Techniques ] [ OPT: Stochastic ] [ RL: Everything Else ] [ Reinforcement Learning ]


Abstract:

Replay buffers are a key component in many reinforcement learning schemes. Yet, their theoretical properties are not fully understood. In this paper we analyze a system where a stochastic process X is pushed into a replay buffer and then randomly sampled to generate a stochastic process Y from the replay buffer. We provide an analysis of the properties of the sampled process, such as stationarity, Markovity, and autocorrelation, in terms of the properties of the original process. Our theoretical analysis sheds light on why a replay buffer may be a good de-correlator. Our analysis provides theoretical tools for proving the convergence of replay-buffer-based algorithms, which are prevalent in reinforcement learning.
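As an illustration of the setup described in the abstract, the sketch below pushes a stochastic process X into a finite replay buffer and draws uniform samples from it to form a process Y, then compares the empirical lag-1 autocorrelation of the two processes. This is a minimal sketch, not the authors' code: the AR(1) choice for X, the buffer size, and all other parameter values are illustrative assumptions.

```python
# Minimal sketch of the replay-buffer sampling setup (illustrative, not the paper's code).
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def ar1_process(n, rho=0.9, sigma=1.0):
    """Generate an AR(1) chain X_{t+1} = rho * X_t + noise (a simple Markov process)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + sigma * rng.normal()
    return x

def replay_sample(x, buffer_size=1000):
    """Push X into a FIFO replay buffer and draw one uniform sample per step to form Y."""
    buffer = deque(maxlen=buffer_size)
    y = []
    for x_t in x:
        buffer.append(x_t)
        y.append(buffer[rng.integers(len(buffer))])
    return np.array(y)

def lag1_autocorr(z):
    """Empirical lag-1 autocorrelation of a sequence."""
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

if __name__ == "__main__":
    x = ar1_process(n=50_000)
    y = replay_sample(x)
    print(f"lag-1 autocorrelation of X: {lag1_autocorr(x):.3f}")
    print(f"lag-1 autocorrelation of Y: {lag1_autocorr(y):.3f}")  # typically much closer to 0
```

Running this, Y's lag-1 autocorrelation is typically far smaller than X's, which is the de-correlation effect the paper studies formally.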
