Spotlight
Model Performance Scaling with Multiple Data Sources
Tatsunori Hashimoto
Real-world machine learning systems are often trained on a mix of data sources with varying cost and quality. Understanding how the size and composition of a training dataset affect model performance is critical both for advancing our understanding of generalization and for designing more effective data collection policies. We show that there is a simple scaling law that predicts the loss incurred by a model even under varying dataset composition. Our work expands recent observations of scaling laws for log-linear generalization error in the i.i.d. setting and uses this to cast model performance prediction as a learning problem. Using the theory of optimal experimental design, we derive a simple rational-function approximation to generalization error that can be fitted using a few model training runs. Our approach achieves highly accurate ($r^2 \approx .9$) predictions of model performance under substantial extrapolation on two standard supervised learning tasks and remains accurate ($r^2 \approx .83$) on more challenging machine translation and question answering tasks, where many baselines achieve worse-than-random performance.
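The idea of fitting a parametric generalization-error curve to a few training runs can be sketched as a small curve-fitting problem. The functional form below, $L(n_1, n_2) = c + (k_1 n_1 + k_2 n_2)^{-\alpha}$ for two data sources of sizes $n_1, n_2$, is an illustrative assumption standing in for the paper's derived rational-function approximation; all parameter names and values are hypothetical.

```python
# Minimal sketch: fit a hypothetical two-source scaling law
# L(n1, n2) = c + (k1*n1 + k2*n2)**(-alpha) to a handful of
# simulated "training runs", then extrapolate to a new composition.
# This is NOT the paper's exact estimator, just an illustration of
# casting performance prediction as a curve-fitting problem.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, k1, k2, alpha, c):
    n1, n2 = n
    return c + (k1 * n1 + k2 * n2) ** (-alpha)

rng = np.random.default_rng(0)

# Simulated runs: dataset sizes for the two sources and observed losses.
n1 = rng.uniform(1e3, 1e4, size=20)
n2 = rng.uniform(1e3, 1e4, size=20)
true_params = dict(k1=2.0, k2=0.5, alpha=0.3, c=0.1)
loss = scaling_law((n1, n2), **true_params) + rng.normal(0.0, 1e-3, size=20)

# Fit with positivity bounds so the power base stays well defined.
popt, _ = curve_fit(
    scaling_law, (n1, n2), loss,
    p0=[1.0, 1.0, 0.5, 0.05],
    bounds=([1e-6, 1e-6, 1e-3, 0.0], [np.inf, np.inf, 2.0, 1.0]),
)

# In-sample goodness of fit (r^2).
fit = scaling_law((n1, n2), *popt)
r2 = 1.0 - np.sum((loss - fit) ** 2) / np.sum((loss - loss.mean()) ** 2)

# Extrapolate to a much larger, differently composed dataset.
pred = scaling_law((5e4, 2e4), *popt)
print(f"r^2 = {r2:.3f}, extrapolated loss = {pred:.4f}")
```

In practice the runs used for fitting would be chosen via optimal experimental design rather than sampled uniformly as above, so that a few runs constrain the curve well.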
Author Information
Tatsunori Hashimoto (Stanford)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Model Performance Scaling with Multiple Data Sources »
  Thu. Jul 22nd 04:00 -- 06:00 AM
More from the Same Authors
- 2023: Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks »
  Daniel Kang · Xuechen Li · Ion Stoica · Carlos Guestrin · Matei Zaharia · Tatsunori Hashimoto
- 2023 Poster: Data Feedback Loops: Model-driven Amplification of Dataset Biases »
  Rohan Taori · Tatsunori Hashimoto
- 2023 Poster: Coder Reviewer Reranking for Code Generation »
  Tianyi Zhang · Tao Yu · Tatsunori Hashimoto · Mike Lewis · Scott Yih · Daniel Fried · Sida Wang
- 2023 Poster: Whose Opinions Do Language Models Reflect? »
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: Data Feedback Loops: Model-driven Amplification of Dataset Biases »
  Rohan Taori · Tatsunori Hashimoto
- 2023 Oral: Whose Opinions Do Language Models Reflect? »
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: Evaluating Self-Supervised Learning via Risk Decomposition »
  Yann Dubois · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: Evaluating Self-Supervised Learning via Risk Decomposition »
  Yann Dubois · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: Out-of-Domain Robustness via Targeted Augmentations »
  Irena Gao · Shiori Sagawa · Pang Wei Koh · Tatsunori Hashimoto · Percy Liang
- 2022 Poster: Identifiability Conditions for Domain Adaptation »
  Ishaan Gulrajani · Tatsunori Hashimoto
- 2022 Spotlight: Identifiability Conditions for Domain Adaptation »
  Ishaan Gulrajani · Tatsunori Hashimoto
- 2020 Poster: Robustness to Spurious Correlations via Human Annotations »
  Megha Srivastava · Tatsunori Hashimoto · Percy Liang
- 2018 Poster: Fairness Without Demographics in Repeated Loss Minimization »
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang
- 2018 Oral: Fairness Without Demographics in Repeated Loss Minimization »
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang