Poster in Workshop: 2nd Workshop on Generative AI and Law (GenLaw ’24)
Evaluations of Machine Learning Privacy Defenses are Misleading
Michael Aerni · Jie Zhang · Florian Tramèr
Abstract:
Existing evaluations of empirical privacy defenses fail to characterize the privacy leakage of the most vulnerable samples, use weak attacks, and avoid comparisons with practical differential privacy baselines. We propose a stronger evaluation protocol that avoids those issues, and find that a properly tuned, high-utility DP-SGD baseline with vacuous provable guarantees outperforms many heuristic defenses in the literature.
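The abstract's DP-SGD baseline refers to the standard differentially private training recipe: clip each per-sample gradient to a fixed norm, average, and add Gaussian noise. A minimal NumPy sketch of one such update step is below; the function name, hyperparameters, and use of plain NumPy arrays are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update (illustrative sketch, not the paper's code):
    clip each per-sample gradient to clip_norm, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm and batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

With a large `noise_multiplier` the formal (epsilon, delta) guarantee can be vacuous, yet the clipping and noise still limit what any single training sample contributes, which is the empirical effect the abstract's baseline exploits.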