

Poster

In-Context Reinforcement Learning with Hierarchical Chain of Experience

Sili Huang · Jifeng Hu · Hechang Chen · Lichao Sun · Bo Yang


Abstract: In-context learning is a promising approach for offline reinforcement learning (RL) to handle online tasks, achieved by providing task prompts. Recent works have demonstrated that in-context RL can emerge with self-improvement in a trial-and-error manner when RL tasks are treated as an across-episodic sequential prediction problem. Although this self-improvement does not require gradient updates, current methods still suffer from high computational costs as the across-episodic sequence grows with the task horizon. To this end, we propose a Hierarchical Chain of experience Context (H2C) to achieve self-improvement in a high-level trial-and-error manner. Specifically, H2C is inspired by the efficient hierarchical structure of human decision-making and reconstructs the sequence from high-level decisions instead of the low-level actions that interact with the environment. Since one high-level decision can guide multiple low-level actions, H2C naturally avoids excessively long sequences and solves online tasks more efficiently. Experimental results show that H2C achieves state-of-the-art performance on long-horizon tasks compared with current in-context RL methods. In particular, the online evaluation time of H2C is 36$\times$ faster than baselines on the D4RL benchmark and 27$\times$ faster on the Grid World benchmark.
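To make the sequence-length argument concrete, here is a minimal sketch of why replacing per-step actions with high-level decisions shortens the across-episodic context. This is not the authors' implementation: the function names, the token layout, and the fixed abstraction factor `k` (low-level steps guided by one high-level decision) are illustrative assumptions; in H2C the high-level decisions are part of the learned method.

```python
# Illustrative sketch only; NOT the H2C implementation. Assumes a fixed
# abstraction factor k (steps per high-level decision) and roughly 3 tokens
# per timestep (e.g., return/state/action-style triples).

def flat_sequence_length(num_episodes: int, horizon: int,
                         tokens_per_step: int = 3) -> int:
    """Across-episodic sequence built from every low-level step:
    length grows linearly with the full task horizon."""
    return num_episodes * horizon * tokens_per_step


def hierarchical_sequence_length(num_episodes: int, horizon: int, k: int,
                                 tokens_per_decision: int = 3) -> int:
    """Across-episodic sequence built from high-level decisions:
    one decision covers k low-level actions, dividing the horizon by k."""
    high_level_steps = -(-horizon // k)  # ceil(horizon / k)
    return num_episodes * high_level_steps * tokens_per_decision


if __name__ == "__main__":
    episodes, horizon, k = 8, 1000, 25
    print("flat:", flat_sequence_length(episodes, horizon))                     # 24000 tokens
    print("hierarchical:", hierarchical_sequence_length(episodes, horizon, k))  # 960 tokens
```

Because transformer attention cost scales with sequence length, a k-fold shorter context of this kind is consistent with the large online-evaluation speedups the abstract reports, though the exact factor depends on the method's actual tokenization.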
