Explaining Data Mixing Scaling Laws
Abstract
Recent research has established empirical scaling laws that predict model performance on multi-domain data mixtures. However, a theoretical understanding of the underlying loss behavior remains limited. In this work, we propose a unified framework that explains the mechanics of data mixing. Our approach extends theoretical perspectives originally developed for standard neural scaling laws (e.g., Kaplan and Chinchilla) to the multi-domain setting. Under the distributional assumption that domains overlap on fundamental skills while diverging on specialized skills, we identify two key factors that determine the domain loss of models trained on different data mixtures: Capacity Competition, where the allocation of finite model capacity couples domain losses globally, and Noise Reduction, where optimal mixture weights shift toward harder-to-learn domains to reduce variance. Experiments demonstrate that our framework fits the loss landscape with lower Mean Relative Error (MRE) than existing empirical baselines and accurately predicts optimal training mixtures. Crucially, it achieves these results with significantly fewer fitted parameters than these baselines.
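
To make the two factors concrete, the sketch below gives one hypothetical way they could enter a per-domain loss. The functional form and all symbols ($c_i$, $E_i$, $A_i$, $B_i$, $\alpha$, $\beta$) are illustrative assumptions for exposition only, not the framework's actual parameterization.

% Illustrative sketch only; NOT the paper's functional form.
% L_i : loss on domain i for a model of size N trained on D tokens with mixture weights w.
% c_i(w) : hypothetical share of model capacity allocated to domain i, with sum_i c_i(w) = 1,
%          so raising one domain's share lowers the others' (capacity competition).
% B_i / (w_i D)^beta : variance-like term that shrinks as domain i sees more data,
%          pushing optimal weights toward domains with large B_i (noise reduction).
\begin{equation}
  L_i(\mathbf{w}) \;\approx\; E_i
  \;+\; \frac{A_i}{\bigl(c_i(\mathbf{w})\, N\bigr)^{\alpha}}
  \;+\; \frac{B_i}{\bigl(w_i D\bigr)^{\beta}}
\end{equation}

Under a form like this, minimizing an aggregate loss over domains trades the shared capacity term off against the per-domain variance terms, which is the qualitative behavior the abstract describes.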