

Poster
in
Workshop: Data-centric Machine Learning Research (DMLR)

On Memorization and Privacy Risks of Sharpness Aware Minimization

Young In Kim · Pratiksha Agrawal · Johannes Royset · Rajiv Khanna


Abstract:

Many recent works focus on designing algorithms that seek wider optima in neural network loss landscapes, motivated by empirical evidence that wider optima lead to better generalization on many datasets. In this work, we dissect these performance gains through the lens of data memorization in overparameterized models. We define a new metric that identifies the specific data points on which algorithms seeking wider optima outperform vanilla SGD. This insight helps us unearth data privacy risks associated with such algorithms, which we verify through exhaustive empirical evaluations. Finally, we propose mitigation strategies to achieve a more desirable accuracy vs. privacy trade-off. The proposed metric and insights also apply more generally when analyzing the performance and risks of a novel optimization algorithm.
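For context, below is a minimal sketch of the wider-optima-seeking update the title refers to, Sharpness Aware Minimization (SAM), following the standard two-step formulation of Foret et al. (2021). The function name `sam_step` and the hyperparameter defaults are illustrative assumptions, not the authors' code, and the paper's proposed memorization metric is not reproduced here.

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One SAM update (standard two-step formulation; illustrative sketch).

    Step 1: perturb the weights toward the locally worst-case direction.
    Step 2: take the base optimizer step using the gradient evaluated
            at the perturbed point, applied to the original weights.
    """
    # --- Step 1: ascent to the neighborhood's worst-case point ---
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.norm(
        torch.stack([p.grad.norm(p=2)
                     for p in model.parameters() if p.grad is not None]),
        p=2,
    )
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)  # epsilon = rho * g / ||g||
            p.add_(e)                               # w <- w + epsilon
            eps.append(e)
    model.zero_grad()

    # --- Step 2: gradient at the perturbed weights ---
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)          # restore w before the optimizer step
    base_optimizer.step()          # w <- w - eta * grad L(w + epsilon)
    base_optimizer.zero_grad()
    return loss.item()
```

Note the design implication: each SAM step requires two forward-backward passes, which is part of why per-example comparisons against vanilla SGD (as the abstract describes) are a natural lens for asking where the extra computation pays off.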
