Reinforcement learning (RL) has so far seen limited real-world application. One key challenge is that typical RL algorithms rely heavily on a reset mechanism to sample proper initial states; in practice, these reset mechanisms are expensive to implement because they require human intervention or heavily engineered environments. To make learning more practical, we propose a generic no-regret reduction for systematically designing reset-free RL algorithms. Our reduction turns the reset-free RL problem into a two-player game. We show that achieving sublinear regret in this two-player game implies learning a policy with both sublinear performance regret and a sublinear total number of resets in the original RL problem. This means the agent eventually learns to perform optimally and avoid resets. To demonstrate the effectiveness of this reduction, we design an instantiation for linear Markov decision processes, which is the first provably correct reset-free RL algorithm.
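The core principle behind the reduction — that sublinear regret for both players in a two-player game drives their average play toward an equilibrium — can be illustrated with a toy example. The sketch below is not the paper's algorithm (which operates on a game constructed from the reset-free RL problem); it simply shows the general no-regret phenomenon the abstract invokes, using Hedge (multiplicative weights) self-play on a small zero-sum matrix game. All names and parameters here are illustrative choices.

```python
import numpy as np

def hedge_self_play(A, T=2000, eta=0.05):
    """Both players run Hedge on zero-sum payoff matrix A
    (row player maximizes, column player minimizes)."""
    n, m = A.shape
    wr, wc = np.ones(n), np.ones(m)          # Hedge weights for each player
    avg_r, avg_c = np.zeros(n), np.zeros(m)  # running averages of strategies
    for _ in range(T):
        p = wr / wr.sum()                    # row player's mixed strategy
        q = wc / wc.sum()                    # column player's mixed strategy
        avg_r += p
        avg_c += q
        wr *= np.exp(eta * (A @ q))          # row player's expected payoffs
        wc *= np.exp(-eta * (A.T @ p))       # column player's expected losses
    avg_r /= T
    avg_c /= T
    # Exploitability: how much a best response gains against the averages.
    # Sublinear regret for both players makes this gap shrink with T.
    gap = (A @ avg_c).max() - (avg_r @ A).min()
    return avg_r, avg_c, gap

# A 2x2 zero-sum game whose unique Nash equilibrium mixes (0.4, 0.6)
# for both players, with game value 0.2.
A = np.array([[2.0, -1.0], [-1.0, 1.0]])
p, q, gap = hedge_self_play(A)
```

Because each player's average regret is O(1/sqrt(T)) under Hedge, the exploitability `gap` of the time-averaged strategies is bounded by the sum of the average regrets; in the paper's setting the analogous statement is that sublinear regret in the constructed game yields sublinear performance regret and sublinearly many resets.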
Author Information
Hoai-An Nguyen (Rutgers University)
Hello! I am a current undergraduate student at Rutgers University, New Brunswick. I am super fortunate to be working under the direction of Sepehr Assadi. This past summer, I was extremely fortunate to intern under Ching-An Cheng at Microsoft Research. My main research interests are broadly in the design and analysis of algorithms, complexity theory, and machine learning theory. I am also very passionate about teaching. I am currently a learning assistant for the Data Structures course at Rutgers University. I was previously the head learning assistant, and was also previously a TA for the Computer Algorithms course and an LA for the Introduction to CS course.
Ching-An Cheng (Microsoft Research)
More from the Same Authors
- 2023 : Survival Instinct in Offline Reinforcement Learning and Implicit Human Bias in Data
  Anqi Li · Dipendra Misra · Andrey Kolobov · Ching-An Cheng
- 2023 Poster: MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations
  Anqi Li · Byron Boots · Ching-An Cheng
- 2023 Poster: Hindsight Learning for MDPs with Exogenous Inputs
  Sean R. Sinclair · Felipe Vieira Frujeri · Ching-An Cheng · Luke Marshall · Hugo Barbalho · Jingling Li · Jennifer Neville · Ishai Menache · Adith Swaminathan
- 2022 Poster: Adversarially Trained Actor Critic for Offline Reinforcement Learning
  Ching-An Cheng · Tengyang Xie · Nan Jiang · Alekh Agarwal
- 2022 Oral: Adversarially Trained Actor Critic for Offline Reinforcement Learning
  Ching-An Cheng · Tengyang Xie · Nan Jiang · Alekh Agarwal
- 2021 Poster: Safe Reinforcement Learning Using Advantage-Based Intervention
  Nolan Wagener · Byron Boots · Ching-An Cheng
- 2021 Spotlight: Safe Reinforcement Learning Using Advantage-Based Intervention
  Nolan Wagener · Byron Boots · Ching-An Cheng