Towards Understanding Continual Factual Knowledge Acquisition of Language Models: From Theory to Algorithm
Abstract
Continual Pre-Training (CPT) is essential for enabling Language Models (LMs) to integrate new factual knowledge without erasing previously learned knowledge. While classical CPT techniques such as data replay have become the standard paradigm, the mechanisms underlying how LMs acquire and retain facts over time, termed continual Factual Knowledge Acquisition (cFKA), remain unclear. In this work, we present a theoretical framework that characterizes the training dynamics of cFKA using a single-layer Transformer with linear attention, offering a unified explanation for the behavior of popular CPT methods. Our analysis reveals that regularization-based methods merely adjust the convergence rate of parameters without altering the inherent tendency to forget, whereas data replay methods shift the convergence dynamics and stabilize pretrained knowledge. Building on these insights, we propose a novel generative data replay approach, called Selecting Tokens via attentiOn Contribution (STOC), which identifies influential factual snippets to guide replay generation. Extensive experiments on both synthetic and real-world datasets validate our theoretical findings and demonstrate that STOC effectively enhances cFKA by mitigating catastrophic forgetting.