

Poster

Improving Model Selection by Employing the Test Data

Max Westphal · Werner Brannath

Pacific Ballroom #123

Keywords: [ Supervised Learning ] [ Robust Statistics and Machine Learning ] [ Optimization - Others ] [ Healthcare ] [ Approximate Inference ]


Abstract:

Model selection and evaluation are usually strictly separated by means of data splitting to enable unbiased estimation and simple statistical inference for the unknown generalization performance of the final prediction model. We investigate the properties of novel evaluation strategies, namely when the final model is selected based on its empirical performance on the test data. To guard against selection-induced overoptimism, we employ a parametric multiple test correction based on the approximate multivariate distribution of the performance estimates. Our numerical experiments involve training common machine learning algorithms (EN, CART, SVM, XGB) on various artificial classification tasks. At its core, our proposed approach improves model selection in terms of the expected final model performance without introducing overoptimism. We furthermore observed a higher probability of a successful evaluation study, making it easier in practice to empirically demonstrate a sufficiently high predictive performance.
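To illustrate the flavour of such a correction, the following is a minimal Python sketch (not the authors' code) of a maxT-style adjustment: several candidate models are compared on the same test set, their accuracy estimates are treated as approximately multivariate normal, and the lower confidence bound for the selected (maximum) accuracy uses an equi-coordinate quantile of the joint distribution instead of the usual univariate one. The function name `select_and_evaluate` and the 0/1 correctness-matrix interface are assumptions introduced for this example.

```python
# Hedged sketch of a maxT-style, selection-adjusted evaluation on test data.
# Assumptions: K candidate classifiers are already trained; `correct` is an
# (n_test, K) matrix of 0/1 per-sample correctness indicators, one column per
# model, with non-degenerate (non-constant) columns.

import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq


def select_and_evaluate(correct, alpha=0.05):
    """Pick the best model on the test data and return a selection-adjusted
    lower confidence bound for its accuracy."""
    n, K = correct.shape
    acc = correct.mean(axis=0)                       # empirical test accuracies
    cov = np.cov(correct, rowvar=False) / n          # covariance of the estimates
    se = np.sqrt(np.diag(cov))
    corr = cov / np.outer(se, se)                    # correlation of the estimates

    # Equi-coordinate (1 - alpha) quantile of the K-variate normal with the
    # estimated correlation; this replaces the univariate z-quantile to
    # account for selecting the maximum of K correlated estimates.
    mvn = multivariate_normal(mean=np.zeros(K), cov=corr, allow_singular=True)
    crit = brentq(lambda c: mvn.cdf(np.full(K, c)) - (1 - alpha), 0.0, 10.0)

    best = int(np.argmax(acc))                       # selection on the test data
    lower_bound = acc[best] - crit * se[best]        # adjusted lower bound
    return best, acc[best], lower_bound
```

Because the critical value grows with the number and diversity of candidate models, the bound automatically pays for the extra optimism introduced by selecting on the test data, which is the behaviour the abstract describes.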
