

Poster

LLM-Empowered State Representation for Reinforcement Learning

Boyuan Wang · Yun Qu · Yuhang Jiang · Jianzhun Shao · Chang Liu · Wenming Yang · Xiangyang Ji


Abstract:

Conventional state representations in reinforcement learning often omit critical task-related details, making it difficult for value networks to establish accurate mappings from states to task rewards. Traditional methods typically depend on extensive sample collection to enrich state representations with task-specific information, which leads to low sample efficiency and high time costs. Recently, the rise of knowledgeable large language models (LLMs) has offered a promising way to inject such priors with minimal human intervention. Motivated by this, we propose LLM-Empowered State Representation (LESR), a novel approach that uses an LLM to autonomously generate task-related state-representation code, which enhances the continuity of network mappings and facilitates efficient training. Experimental results demonstrate that LESR exhibits high sample efficiency and outperforms state-of-the-art baselines by an average of 29% in accumulated reward on MuJoCo tasks and 30% in success rate on Gym-Robotics tasks. Code for LESR is available at https://anonymous.4open.science/r/LESR.
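The core idea can be illustrated with a minimal sketch: the LLM emits a function that maps the raw state vector to extra task-related features, and the agent trains on the concatenation of the raw state and these features. The function body, feature choices, and state-index layout below are purely hypothetical illustrations, not code from the paper.

```python
import numpy as np

def llm_generated_features(state: np.ndarray) -> np.ndarray:
    """Hypothetical LLM-generated feature function for a locomotion task.

    Assumed (illustrative) state layout: index 0 is torso height,
    index 1 is forward velocity.
    """
    torso_height = state[0]
    velocity = state[1]
    return np.array([
        torso_height * velocity,  # interaction term linking posture and speed
        np.tanh(velocity),        # bounded velocity signal for smoother value mapping
    ])

def augment_state(state: np.ndarray) -> np.ndarray:
    """Concatenate the raw state with the generated features before
    passing it to the policy and value networks."""
    return np.concatenate([state, llm_generated_features(state)])
```

In this sketch the augmented state simply grows by the number of generated features, so existing RL pipelines can consume it without architectural changes beyond the input dimension.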
