Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking
Abstract
Efficiently processing long sequences with Transformer models usually requires splitting the computation across accelerators via context parallelism. The dominant approaches in this family, such as Ring Attention and DeepSpeed Ulysses, enable scaling along the context dimension but are not designed for memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as the Fully Pipelined Distributed Transformer or activation offloading, can extend the achievable context length further, but at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This chunking substantially reduces the activation memory footprint of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach lowers peak activation memory usage by as much as 82.5% for 70B Transformers while matching previous context parallelism techniques in training speed. UPipe supports context lengths of up to 5M tokens when training 8B models on a single 8xH100 node, a 25% improvement over prior methods.