Poster in Workshop: Data-centric Machine Learning Research (DMLR): Datasets for Foundation Models
Data Shapley in One Training Run
Jiachen Wang · Prateek Mittal · Dawn Song · Ruoxi Jia
Data Shapley provides a principled framework for attributing the contribution of data in machine learning. However, existing approaches require re-training models on different data subsets, which is computationally intensive and precludes their application to large-scale models. Furthermore, they produce the same attribution scores for any model produced by the learning algorithm, so they cannot perform targeted attribution for a specific model obtained from a single run of the algorithm. This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest. In its most efficient implementation, our technique incurs negligible additional runtime compared to standard model training. This dramatic efficiency improvement makes it possible, for the first time, to perform data attribution during the foundation-model pre-training stage. We present several case studies that offer fresh insights into the contribution of pre-training data and discuss their implications for copyright in generative AI and pre-training data curation.
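To make the core idea concrete, here is a minimal, hypothetical sketch of the first-order quantity that motivates in-run attribution: to first order in the learning rate, one SGD step changes the validation loss by a sum of per-example terms, so each example's Shapley value for that step reduces to its own gradient dot product with the validation gradient. The model, data, learning rate, and the naive per-example gradient loop below are illustrative assumptions, not the paper's setup; the paper's efficient implementation avoids this O(n) loop entirely.

```python
# Hypothetical first-order sketch: accumulate, for each training example,
# its per-step contribution -eta/n * <g_i, g_val> to the validation loss.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
lr = 0.1
opt = torch.optim.SGD(model.parameters(), lr=lr)

train_x, train_y = torch.randn(32, 10), torch.randn(32, 1)  # placeholder data
val_x, val_y = torch.randn(8, 10), torch.randn(8, 1)

scores = torch.zeros(len(train_x))  # accumulated attribution per example

for step in range(20):
    # Gradient of the validation loss at the current parameters.
    model.zero_grad()
    loss_fn(model(val_x), val_y).backward()
    g_val = [p.grad.detach().clone() for p in model.parameters()]

    # Naive per-example training gradients and their dot products with g_val.
    # (This loop is what an efficient implementation would fold into the
    # ordinary forward/backward pass.)
    for i in range(len(train_x)):
        model.zero_grad()
        loss_fn(model(train_x[i:i + 1]), train_y[i:i + 1]).backward()
        dot = sum((p.grad * gv).sum()
                  for p, gv in zip(model.parameters(), g_val))
        # First-order estimate of example i's contribution to this step's
        # reduction in validation loss.
        scores[i] += (lr / len(train_x)) * dot.item()

    # Take the actual training step on the full batch.
    model.zero_grad()
    loss_fn(model(train_x), train_y).backward()
    opt.step()

print(scores.topk(5).indices)  # examples most helpful to this particular run
```

Because the scores are accumulated along the trajectory of one specific training run, they attribute data to the model that run actually produced, rather than averaging over all models the learning algorithm could output, which is what retraining-based Data Shapley estimates.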