

Poster in Workshop: Workshop on Reinforcement Learning Theory

Provable RL with Exogenous Distractors via Multistep Inverse Dynamics

Yonathan Efroni · Dipendra Misra · Akshay Krishnamurthy · Alekh Agarwal · John Langford


Abstract:

Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations, such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information from raw observations and subsequently plan efficiently. However, such approaches can fail in the presence of temporally correlated, structured noise, a phenomenon that is common in practice. We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP, for rich-observation RL. We establish structural results and provide new sample-efficiency guarantees for learning in these models. For the latter, we show that DPCID, a new variant of the PCID algorithm previously studied by Du et al. (2019), learns a generalization of inverse dynamics and is provably sample- and computationally efficient in Exogenous Block MDPs when the endogenous state dynamics are near deterministic. The sample complexity of DPCID depends polynomially on the size of the latent endogenous state space, while not directly depending on the size of the observation space. We present experiments on challenging exploration problems that show the promise of our approach.
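To make the core idea concrete, below is a minimal sketch of a multistep inverse dynamics objective: train an encoder so that the first action taken is predictable from the current observation and an observation several steps later. The intuition, which the abstract appeals to, is that exogenous noise carries no information about the agent's actions, so the learned representation tends to discard it. This is an illustrative assumption, not the paper's DPCID algorithm; the network sizes, horizon set, and loss wiring here are hypothetical choices.

```python
# Illustrative sketch of a multistep inverse dynamics objective (PyTorch).
# Not the authors' DPCID implementation; architecture and horizons are assumptions.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps a raw observation x to a low-dimensional representation phi(x)."""

    def __init__(self, obs_dim: int, rep_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, rep_dim),
        )

    def forward(self, x):
        return self.net(x)


class MultistepInverseModel(nn.Module):
    """Predicts the first action a_t from (x_t, x_{t+k}) for horizons k = 1..max_k."""

    def __init__(self, obs_dim: int, rep_dim: int, num_actions: int, max_k: int):
        super().__init__()
        self.phi = Encoder(obs_dim, rep_dim)
        # One action-classifier head per prediction horizon k.
        self.heads = nn.ModuleList(
            nn.Linear(2 * rep_dim, num_actions) for _ in range(max_k)
        )

    def forward(self, x_t, x_tk, k: int):
        # Concatenate representations of the current and k-step-later observations.
        z = torch.cat([self.phi(x_t), self.phi(x_tk)], dim=-1)
        return self.heads[k - 1](z)  # logits over the first action a_t


def inverse_dynamics_loss(model, x_t, a_t, x_tk, k: int):
    """Cross-entropy for predicting a_t from (x_t, x_{t+k}); exogenous
    distractors are uninformative about a_t, so phi learns to ignore them."""
    logits = model(x_t, x_tk, k)
    return nn.functional.cross_entropy(logits, a_t)
```

In a sketch like this, the encoder phi would then serve as the learned latent state for downstream planning or exploration; the paper's guarantees concern the provable version of this pipeline, not this particular network.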
