Flat Minima and Generalization: Insights from Stochastic Convex Optimization
Matan Schliserman ⋅ Shira Vansover-Hager ⋅ Tomer Koren
Abstract
Understanding the generalization behavior of learning algorithms is a central goal of learning theory. A recently emerging explanation is that learning algorithms are successful in practice because they converge to flat minima, which have been consistently associated with improved generalization performance. In this work, we study the link between flat minima and generalization in the canonical setting of stochastic convex optimization with a non-negative, $\beta$-smooth objective. Our first finding is that, even in this fundamental setting, flat empirical minima may incur trivial $\Omega(1)$ population risk while sharp minima generalize optimally. We then demonstrate that this phenomenon extends to the sharpness-aware algorithms introduced by Foret et al. (2021), namely Sharpness-Aware Gradient Descent (SA-GD) and Sharpness-Aware Minimization (SAM). For SA-GD, we prove that it successfully converges to a flat minimum at a fast rate, but the population risk of the solution can still be as large as $\Omega(1)$. For SAM, we show that although it minimizes the empirical loss, it may converge to a sharp minimum and likewise incur population risk $\Omega(1)$. Finally, we establish population risk upper bounds for both SA-GD and SAM using algorithmic stability techniques.
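For context, the sharpness-aware scheme of Foret et al. (2021) referenced above targets the robust objective and, in its practical one-step form, updates as follows (this is the standard formulation from that work, restated here for the reader; $\widehat{F}$ denotes the empirical loss, $\rho$ the perturbation radius, and $\eta$ the step size):
$$
\min_{w}\; \max_{\|\epsilon\|\le\rho} \widehat{F}(w+\epsilon),
\qquad
w_{t+1} \;=\; w_t \;-\; \eta\, \nabla \widehat{F}\!\left(w_t + \rho\,\frac{\nabla \widehat{F}(w_t)}{\|\nabla \widehat{F}(w_t)\|}\right).
$$
SA-GD refers to (approximate) gradient descent on the inner-maximized objective itself, whereas SAM uses the first-order ascent approximation above.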