Small Data, Big Decisions: Model Selection in the Small-Data Regime
Jorg Bornschein · Francesco Visin · Simon Osindero

Wed Jul 15 02:00 PM -- 02:45 PM & Thu Jul 16 03:00 AM -- 03:45 AM (PDT)

Highly overparameterized neural networks can display curiously strong generalization performance -- a phenomenon that has recently attracted a wealth of theoretical and empirical research aimed at understanding it. In contrast to most previous work, which typically considers performance as a function of model size, in this paper we empirically study generalization performance as the size of the training set varies over multiple orders of magnitude. These systematic experiments lead to some interesting and potentially very useful observations; perhaps most notably, that training on smaller subsets of the data can lead to more reliable model-selection decisions while incurring lower computational overhead. Our experiments furthermore allow us to estimate Minimum Description Lengths for common datasets given modern neural network architectures, thereby paving the way for principled model selection that takes Occam's razor into account.
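The most actionable claim here, that model rankings obtained on small training subsets tend to agree with full-data rankings, is easy to illustrate. Below is a minimal sketch (not the authors' code; the scikit-learn models and the synthetic task are stand-in assumptions) that trains a family of candidate models on nested subsets spanning two orders of magnitude and prints the validation-loss ranking at each size. If the ranking stabilizes early, the cheap small-subset runs already identify the winner.

```python
# Sketch of subset-based model selection (illustrative, not the paper's code):
# rank candidate models by validation loss after training on nested subsets
# of the training data, and check whether the ranking is stable across sizes
# before committing to an expensive full-data run.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Synthetic regression task standing in for a real dataset (an assumption).
X = rng.uniform(-3, 3, size=(4096, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=4096)
X_train, y_train = X[:3072], y[:3072]
X_val, y_val = X[3072:], y[3072:]

# Candidate family: polynomial ridge regressions of increasing capacity.
candidates = {
    f"degree={d}": make_pipeline(PolynomialFeatures(d), Ridge(alpha=1e-3))
    for d in (1, 3, 5, 9)
}

# Evaluate every candidate on training subsets spanning orders of magnitude.
for n in (32, 128, 512, 3072):
    losses = {}
    for name, model in candidates.items():
        model.fit(X_train[:n], y_train[:n])
        losses[name] = mean_squared_error(y_val, model.predict(X_val))
    ranking = sorted(losses, key=losses.get)
    print(f"n={n:5d}  best={ranking[0]:>9s}  ranking={ranking}")
```

As a side note, a standard way to obtain the Minimum Description Length estimates mentioned above is prequential coding, which sums the log-loss a model incurs on each data block before training on it; the abstract does not spell out the exact estimator used, so treat that as background rather than a statement of the authors' method.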

Author Information

Jorg Bornschein (DeepMind)
Francesco Visin (DeepMind)
Simon Osindero (DeepMind)