Keywords: [ Deep Learning Theory ]
In this work, we present an analytical solution to a simple model of classification on structured data. Using methods from statistical physics, we obtain a precise asymptotic expression for the test error of random feature models trained on the strong and weak features model: a data distribution whose input covariance is built from independent blocks, allowing us to tune the saliency of low-dimensional structure and its alignment with the target function. Leveraging this analytical result, we explore how properties of the data distribution impact generalization in the over-parametrized regime and compare the logistic and square losses. In particular, our results show that the logistic loss benefits more robustly from structured data than the square loss. Numerical experiments on MNIST and CIFAR10 confirm this insight.
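The setup described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact model: all dimensions, variances, and the ReLU random-feature map are assumptions chosen for clarity. It draws Gaussian inputs with a block-diagonal covariance (a salient "strong" block and a low-variance "weak" bulk), defines a target aligned with the strong block, and fits an over-parametrized random feature model with ridge (square-loss) regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: d_s strong and d_w weak features.
d_s, d_w = 5, 95
d = d_s + d_w
n_train, n_test = 200, 1000
p = 300  # number of random features; p > n_train, so over-parametrized

# Block-structured (diagonal) input covariance: strong block is more salient.
var_strong, var_weak = 10.0, 0.1
cov_diag = np.concatenate([np.full(d_s, var_strong), np.full(d_w, var_weak)])

# Target direction aligned with the strong block (an assumption here).
w_star = np.zeros(d)
w_star[:d_s] = 1.0

def sample(n):
    """Draw n labeled points from the block-covariance Gaussian model."""
    X = rng.normal(size=(n, d)) * np.sqrt(cov_diag)
    y = np.sign(X @ w_star)
    return X, y

# Random feature map: fixed Gaussian projection followed by a ReLU.
F = rng.normal(size=(d, p)) / np.sqrt(d)
phi = lambda Z: np.maximum(Z, 0.0)

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)
Z_tr, Z_te = phi(X_tr @ F), phi(X_te @ F)

# Square-loss (ridge) fit of the second-layer weights in closed form.
lam = 1e-3
a = np.linalg.solve(Z_tr.T @ Z_tr + lam * np.eye(p), Z_tr.T @ y_tr)

# Classification test error: well below chance when the strong block
# is salient and aligned with the target.
test_err = np.mean(np.sign(Z_te @ a) != y_te)
print(f"test error: {test_err:.3f}")
```

Replacing the ridge solve with a logistic-loss fit (e.g. gradient descent on the cross-entropy) gives the second estimator compared in the text; the paper's asymptotic formulas characterize both errors exactly in the high-dimensional limit.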