PRAC: Principal-Random Subspace for LLM Activation Compression and Memory-Efficient Training
Abstract
Activations have become the primary memory bottleneck in large-batch LLM training. However, existing compression methods fail to exploit the structural information of activations, resulting in slow convergence or limited compression. To address this, we establish the connection between an algorithm's fast convergence and the requirements on subspace projection, and show that a compression scheme should yield an unbiased, low-variance estimate of the original activation. We propose Principal-Random Subspace for LLM Activation Compression (PRAC), which decomposes activations into two components: a principal subspace captured via SVD to retain the dominant information, and a random subspace sampled from the orthogonal complement to approximate the tail. By introducing a precise scaling factor, we prove that PRAC yields an unbiased gradient estimator with \emph{minimum} variance under suitable conditions. Extensive experiments on pre-training and fine-tuning tasks demonstrate that PRAC achieves up to 36\% total memory reduction with negligible performance degradation and minimal computational cost.
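To make the principal-plus-random decomposition concrete, the following is a minimal sketch of a PRAC-style projection. It assumes an $(n, d)$ activation matrix, a rank-$r$ principal subspace, $k$ random tail directions, and a scaling factor $s = \sqrt{(d-r)/k}$ chosen so the random part estimates the tail without bias; the function names and the exact form of the scaling are illustrative assumptions, not the paper's released code.

```python
import torch


def prac_compress(x, r, k):
    """Hypothetical PRAC-style compression of activations x of shape (n, d).

    r: rank of the principal subspace (kept exactly via SVD).
    k: number of random directions drawn from the orthogonal complement.
    """
    n, d = x.shape

    # Principal subspace: top-r right singular vectors of the activations.
    _, _, Vh = torch.linalg.svd(x, full_matrices=False)
    V_p = Vh[:r].T  # (d, r) principal basis

    # Random subspace: orthonormal directions inside the orthogonal
    # complement of the principal subspace.
    G = torch.randn(d, k, device=x.device, dtype=x.dtype)
    G = G - V_p @ (V_p.T @ G)      # project out the principal part
    V_r, _ = torch.linalg.qr(G)    # (d, k) random orthonormal basis

    # Scaling factor (assumed form): with s = sqrt((d - r) / k), the
    # reconstruction below satisfies E[s^2 * V_r V_r^T] = P_complement,
    # making the tail estimate unbiased in expectation.
    s = ((d - r) / k) ** 0.5

    c_p = x @ V_p        # exact principal coefficients, (n, r)
    c_r = s * (x @ V_r)  # scaled random coefficients, (n, k)
    return c_p, c_r, V_p, V_r, s


def prac_decompress(c_p, c_r, V_p, V_r, s):
    # Principal part reconstructed exactly; tail estimated unbiasedly.
    return c_p @ V_p.T + s * (c_r @ V_r.T)
```

Under this sketch, the stored footprint is $(n + d)(r + k)$ entries instead of $nd$, and averaging many reconstructions recovers the original activations because the random subspace is rotation-invariant within the complement.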