
Validity, Reliability, and Significance: A Tutorial on Statistical Methods for Reproducible Machine Learning

Stefan Riezler · Michael Hagmann

Moderator: Pin-Yu Chen

Room 307


Scientific progress in machine learning is driven by empirical studies that evaluate the relative quality of models. The goal of such an evaluation is to compare machine learning methods themselves, not to reproduce single test-set evaluations of particular optimized instances of trained models. The practice of reporting performance scores of single best models is particularly inadequate for deep learning because of the strong dependence of performance on various sources of randomness. This evaluation practice raises methodological questions of whether a model predicts what it purports to predict (validity), whether a model's performance is consistent across replications of the training process (reliability), and whether a performance difference between two models is due to chance (significance). The goal of this tutorial is to answer these questions with concrete statistical tests. The tutorial is hands-on and accompanied by a textbook (Riezler and Hagmann, 2021) and a webpage including R and Python code:
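As a minimal sketch of the kind of significance test discussed in the tutorial, the following Python snippet implements a paired permutation (randomization) test on per-item scores of two models evaluated on the same test set. The function name and the score values are invented for illustration; they are not taken from the tutorial materials.

```python
import numpy as np

def paired_permutation_test(scores_a, scores_b, num_samples=10000, seed=0):
    """Approximate paired permutation test for the mean score difference
    between two models evaluated on the same test items.

    Under the null hypothesis that the two models are interchangeable,
    the sign of each paired difference can be flipped at random; the
    p-value is the fraction of sign-flipped samples whose absolute mean
    difference is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = abs(diffs.mean())
    count = 0
    for _ in range(num_samples):
        # Flip the sign of each paired difference with probability 0.5.
        signs = rng.choice([-1.0, 1.0], size=diffs.shape)
        if abs((signs * diffs).mean()) >= observed:
            count += 1
    # Add-one smoothing keeps the estimated p-value strictly positive.
    return (count + 1) / (num_samples + 1)

# Hypothetical per-item scores of two models on the same eight test items.
a = [0.81, 0.76, 0.90, 0.68, 0.85, 0.79, 0.88, 0.73]
b = [0.78, 0.74, 0.85, 0.70, 0.80, 0.77, 0.83, 0.72]
p = paired_permutation_test(a, b)
print(f"p-value: {p:.3f}")
```

This test makes no distributional assumptions about the scores, which is why randomization tests are a common recommendation for comparing machine learning systems; replacing per-item scores with per-replication scores (one score per training run) extends the same idea to reliability across random seeds.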