Understanding the Impact of Data Temporality on Large Language Model Pretraining
Abstract
Large language models (LLMs) are typically trained on shuffled corpora, yielding models whose knowledge is frozen at training time and whose temporal grounding remains poorly understood. In this work, we study how pretraining dynamics, and data ordering in particular, affect the acquisition of time-sensitive factual knowledge. Our main contributions are twofold. First, we introduce a comprehensive benchmark of over 7,000 temporally grounded questions, together with an evaluation protocol that tests whether models correctly associate facts with their corresponding time periods. Second, we pretrain 6B-parameter language models on chronologically ordered Common Crawl snapshots and compare them against baselines trained with standard shuffled pretraining. Our results show that sequentially trained models match the shuffled baselines on general language understanding and common knowledge while consistently exhibiting more up-to-date and temporally precise factual knowledge; shuffled pretraining, in contrast, peaks on older facts, possibly because they are repeated more often across the corpus. These findings, together with the release of our checkpoints and datasets, provide a foundation for future research on continual learning for large language models.