

Poster

Pausing Policy Learning in Non-stationary Reinforcement Learning

Hyunin Lee · Ming Jin · Javad Lavaei · Somayeh Sojoudi

Hall C 4-9 #1313
[ Paper PDF ]

Poster session: Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT
 
Oral presentation: Oral 3A Reinforcement Learning 1
Wed 24 Jul 1:30 a.m. PDT — 2:30 a.m. PDT

Abstract:

Real-time inference is a challenge in real-world reinforcement learning due to temporal differences in time-varying environments: the system collects data from the past, updates the decision model in the present, and deploys it in the future. We challenge the common belief that continually updating the decision model is optimal for minimizing this temporal gap. We propose a forecasting online reinforcement learning framework and show that strategically pausing decision updates yields better overall performance by effectively managing aleatoric uncertainty. Theoretically, we compute an optimal ratio between the policy update and hold durations, and show that a non-zero policy hold duration provides a sharper upper bound on the dynamic regret. Our experimental evaluations on three different environments also reveal that a non-zero policy hold duration yields higher rewards than continuous decision updates.
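A minimal sketch of the update-then-hold idea described in the abstract (the drifting-bandit environment, the epsilon-greedy learner, and the specific `update_len`/`hold_len` values are illustrative assumptions, not the paper's algorithm or its optimal ratio):

```python
# Illustrative sketch: alternate between updating a decision model for
# `update_len` steps and holding it fixed for `hold_len` steps in a
# non-stationary bandit whose reward means drift over time.
import numpy as np

rng = np.random.default_rng(0)

n_arms = 5
horizon = 10_000
drift = 0.001                  # assumed rate of environment change
update_len, hold_len = 50, 25  # assumed update/hold schedule

means = rng.normal(0.0, 1.0, n_arms)  # time-varying reward means
q = np.zeros(n_arms)                  # learner's value estimates
step_size = 0.1
eps = 0.1

total_reward = 0.0
for t in range(horizon):
    # The environment drifts regardless of what the learner does.
    means += drift * rng.normal(0.0, 1.0, n_arms)

    # Decide whether this step falls in an update phase or a hold phase.
    in_update_phase = (t % (update_len + hold_len)) < update_len

    # Epsilon-greedy action from the (possibly frozen) value estimates.
    if in_update_phase and rng.random() < eps:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(q))

    reward = means[arm] + rng.normal(0.0, 0.1)
    total_reward += reward

    # Only move the decision model during update phases; hold it otherwise.
    if in_update_phase:
        q[arm] += step_size * (reward - q[arm])

print(f"average reward: {total_reward / horizon:.3f}")
```

Sweeping `hold_len` in a sketch like this gives a rough sense of the update/hold trade-off the paper analyzes, though the paper's theoretical ratio and experiments concern full policy learning rather than this toy bandit.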
