

Poster in Workshop: Data-centric Machine Learning Research (DMLR)

Data-Centric Defense: Shaping Loss Landscape with Augmentations to Counter Model Inversion

Si Chen · Feiyang Kang · Nikhil Abhyankar · Ming Jin · Ruoxi Jia


Abstract: Machine learning models have shown susceptibility to various privacy attacks such as model inversion. Current defense techniques are mostly \emph{model-centric}; they are computationally expensive and often incur a significant privacy-utility tradeoff. This paper proposes a novel \emph{data-centric} approach to mitigating model inversion attacks, which offers the unique advantage of letting each individual user control their data's privacy risk. We introduce several privacy-focused data augmentations that make it challenging for attackers to reconstruct private target samples. We provide theoretical analysis and evaluate our approach against state-of-the-art model inversion attacks. Specifically, on standard face recognition benchmarks, we reduce face reconstruction success rates to $\leq1\%$ while maintaining high utility with only a 2\% drop in classification accuracy, significantly surpassing state-of-the-art model-centric defenses. This is the first study to propose a data-centric approach for mitigating model inversion attacks, showing promising potential for decentralized privacy protection.
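
To make the data-centric idea concrete, below is a minimal sketch of how per-user privacy-focused augmentations could be applied to training images before they reach the model. The specific transforms (blur, color jitter, random erasing) and the helper `augment_user_data` are illustrative assumptions, not the augmentations proposed in the paper.

```python
# Hypothetical per-user augmentation pipeline (illustrative only; the
# transforms below are NOT the paper's actual privacy augmentations).
import torch
from torchvision import transforms

privacy_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.5, 2.0)),
    transforms.ToTensor(),
    # Occlude random patches so inverted reconstructions lack identifying detail.
    transforms.RandomErasing(p=0.5, scale=(0.05, 0.2)),
])

def augment_user_data(images):
    """Apply the privacy augmentations to one user's PIL images and
    return a batch tensor ready for training."""
    return torch.stack([privacy_augment(img) for img in images])
```

Because the augmentation runs on each user's own data before training, the privacy protection is decided locally rather than by the model owner, which is the decentralized control the abstract highlights.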
