AReaL-DTA: Dynamic Tree Attention for Efficient Reinforcement Learning of Large Language Models
Jiarui Zhang ⋅ Yuchen Yang ⋅ Ran Yan ⋅ Zhiyu Mei ⋅ Liyuan Zhang ⋅ Daifeng Li ⋅ Wei Fu ⋅ Jiaxuan Gao ⋅ Shusheng Xu ⋅ Yi Wu ⋅ Binhang Yuan
Abstract
Reinforcement learning (RL) based post-training for large language models (LLMs) is computationally expensive, as it generates many rollout sequences that frequently share long token prefixes. Existing RL frameworks usually process these sequences independently, repeatedly recomputing identical prefixes in the forward and backward passes of policy model training, which leads to substantial inefficiencies in computation and memory usage. Although prefix sharing naturally induces a tree structure over rollouts, prior tree-attention–based solutions rely on fully materialized attention masks and scale poorly in RL settings. In this paper, we introduce AReaL-DTA to efficiently exploit prefix sharing in RL training. AReaL-DTA employs a depth-first-search (DFS)–based execution strategy that dynamically traverses the rollout prefix tree during both forward and backward computation, materializing only a single root-to-leaf path at a time. To further improve scalability, AReaL-DTA incorporates a load-balanced distributed batching mechanism that dynamically constructs and processes prefix trees across multiple GPUs. Across popular RL post-training workloads, AReaL-DTA achieves up to $8.31\times$ higher training throughput (on $\tau^2$-bench) while reducing peak GPU memory consumption by approximately 30–40%.
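To make the path-at-a-time idea concrete, the sketch below builds a toy rollout prefix tree and walks it depth-first, materializing only one root-to-leaf token sequence at a time; shared prefixes are stored once in the tree rather than duplicated per rollout. The names used here (`PrefixNode`, `dfs_paths`) are illustrative assumptions and not AReaL-DTA's actual API, which performs this traversal inside the forward and backward passes on GPU.

```python
# Minimal sketch of DFS traversal over a rollout prefix tree.
# PrefixNode and dfs_paths are hypothetical names for illustration only.

from dataclasses import dataclass, field
from typing import Iterator, List


@dataclass
class PrefixNode:
    """A token segment shared by all rollouts passing through this node."""
    tokens: List[int]
    children: List["PrefixNode"] = field(default_factory=list)


def dfs_paths(root: PrefixNode) -> Iterator[List[int]]:
    """Yield one fully materialized root-to-leaf token path at a time.

    Only the current path is held in memory; interior nodes store each
    shared prefix exactly once instead of once per rollout.
    """
    stack = [(root, [])]
    while stack:
        node, prefix = stack.pop()
        path = prefix + node.tokens
        if not node.children:
            yield path  # a complete rollout sequence
        else:
            for child in reversed(node.children):
                stack.append((child, path))


if __name__ == "__main__":
    # Two rollouts sharing the prefix [1, 2, 3]:
    #   [1, 2, 3, 4, 5] and [1, 2, 3, 6]
    root = PrefixNode([1, 2, 3], [PrefixNode([4, 5]), PrefixNode([6])])
    for seq in dfs_paths(root):
        print(seq)
```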