Abstract: Modern machine learning models are often overparametrized: they are rich enough to perfectly interpolate the training data, even when those data are pure noise. The optimization landscape of such problems is still poorly understood: while sufficiently overparametrized systems are expected to be easy to optimize, this intuition has proven challenging to formalize. I will introduce a simple toy model of these optimization landscapes and present characterizations of several algorithms for this model. I will then discuss differences and analogies between this model and more realistic neural networks.
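
As a minimal sketch of the interpolation phenomenon the abstract opens with (an illustration added here, not part of the talk): with more random features than training points, a linear model almost surely admits an exact fit of pure-noise labels, and least squares finds it. The model, dimensions, and feature map below are illustrative assumptions, not the speaker's toy model.

```python
# Sketch: an overparametrized random-features model interpolates pure noise.
# All sizes and the tanh feature map are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 50, 10, 200              # samples, input dim, features (p > n: overparametrized)

X = rng.standard_normal((n, d))    # inputs
y = rng.standard_normal(n)         # labels: pure noise, independent of X

W = rng.standard_normal((d, p))    # frozen random projection (a one-layer net with random weights)
Phi = np.tanh(X @ W)               # n x p feature matrix; rank n almost surely since p > n

theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # minimum-norm interpolating solution
print(np.max(np.abs(Phi @ theta - y)))           # ~1e-12: the noise is fit exactly
```

Because the feature matrix has full row rank, the training error can be driven to exactly zero no matter how the labels were generated; this is the sense in which the model "perfectly interpolates" even pure noise.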