

Poster

Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics

Ankit Vani · Frederick Tung · Gabriel Oliveira · Hossein Sharifi-Noghabi

Hall C 4-9 #1114
[ Project Page ]
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Although models trained with sharpness-aware minimization (SAM) attain high empirical generalization, their sharpness does not always correlate with generalization error. Instead of viewing SAM as minimizing sharpness to improve generalization, our paper considers a new perspective based on SAM's training dynamics. We propose that perturbations in SAM perform perturbed forgetting, where they discard undesirable model biases to exhibit learning signals that generalize better. We relate our notion of forgetting to the information bottleneck principle, use it to explain observations like the better generalization of smaller perturbation batches, and show that perturbed forgetting can exhibit a stronger correlation with generalization than flatness. While standard SAM targets model biases exposed by the steepest ascent directions, we propose a new perturbation that targets biases exposed through the model's outputs. Our output bias forgetting perturbations outperform standard SAM, GSAM, and ASAM on ImageNet, on robustness benchmarks, and in transfer to CIFAR-10 and CIFAR-100, while sometimes converging to sharper regions. Our results suggest that the benefits of SAM can be explained by alternative mechanistic principles that do not require flatness of the loss surface.
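For context, the standard SAM update that the abstract contrasts against perturbs the weights along the normalized steepest-ascent direction before computing the update gradient. Below is a minimal PyTorch-style sketch of that baseline step; the `rho` radius and function names are illustrative conventions, and the paper's output bias forgetting perturbation (which replaces the ascent direction with one derived from the model's outputs) is defined in the paper, not reproduced here.

```python
# Minimal sketch of a standard SAM step (steepest-ascent perturbation),
# shown only as the baseline the paper modifies. `rho` and `sam_step`
# are illustrative, not the paper's API.
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # 1) Gradients at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) Perturb weights by rho along the normalized gradient direction.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    model.zero_grad()

    # 3) Gradients at the perturbed weights, then restore and update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```

Under the paper's perturbed-forgetting view, step 2 is the locus of change: rather than ascending the loss, the perturbation is chosen to discard biases exposed through the model's outputs.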
