

Poster in Workshop: Next Generation of AI Safety

Explaining the Model, Protecting Your Data: Revealing and Mitigating the Data Privacy Risks of Post-Hoc Model Explanations via Membership Inference

Catherine Huang · Martin Pawelczyk · Himabindu Lakkaraju

Keywords: [ Foundation Models ] [ Deep Learning ] [ Interpretability ] [ Privacy ] [ Data Privacy ] [ Explainability ] [ Differential Privacy ] [ Trustworthy ML ] [ Membership Inference Attacks ] [ Post-Hoc Explanations ] [ Adversarial ML ]


Abstract:

Predictive machine learning models are increasingly deployed in high-stakes contexts involving sensitive personal data; in these contexts, there is a trade-off between model explainability and data privacy. In this work, we push the boundaries of this trade-off: focusing on fine-tuning foundation models for image classification, we reveal unforeseen privacy risks of post-hoc model explanations and then offer strategies to mitigate those risks. First, we construct VAR-LRT and L1/L2-LRT, two novel membership inference attacks based on feature attribution explanations that are significantly more successful than existing attacks, particularly in the low false-positive rate regime, where an adversary can identify specific training set members with confidence. Second, we find empirically that optimized differentially private fine-tuning substantially diminishes the success of these attacks while maintaining high model accuracy. This analysis fills a gap in the literature: no prior work thoroughly quantifies the relationship between differential privacy and the resulting privacy risks of post-hoc explanations in a deep learning setting. We carry out a rigorous empirical analysis with 2 novel attacks, 5 vision transformer architectures, 5 benchmark datasets, 4 state-of-the-art post-hoc explanation methods, and 4 privacy strength settings.
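The attack names suggest the general recipe: reduce each example's feature attribution explanation to a scalar statistic (variance for VAR-LRT, L1/L2 norm for L1/L2-LRT) and compare its likelihood under "member" and "non-member" distributions estimated from shadow models. The sketch below illustrates that general idea only; the Gaussian modeling of shadow statistics and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a likelihood-ratio test over explanation statistics.
# Assumptions (not taken from the abstract): attributions arrive as NumPy
# arrays, shadow-model statistics are modeled as Gaussians, and the helper
# names below are hypothetical.
import numpy as np
from scipy.stats import norm

def explanation_statistic(attribution: np.ndarray, kind: str = "var") -> float:
    """Summary statistic of a per-example feature attribution vector."""
    a = attribution.ravel()
    if kind == "var":   # VAR-LRT style: variance of the attribution
        return float(np.var(a))
    if kind == "l1":    # L1-LRT style: L1 norm of the attribution
        return float(np.abs(a).sum())
    if kind == "l2":    # L2-LRT style: L2 norm of the attribution
        return float(np.linalg.norm(a))
    raise ValueError(f"unknown statistic: {kind}")

def lrt_membership_score(target_stat: float,
                         in_stats: np.ndarray,
                         out_stats: np.ndarray) -> float:
    """Gaussian log-likelihood ratio; larger means 'more likely a member'.

    in_stats / out_stats hold the same statistic computed on shadow models
    trained with / without the target example.
    """
    mu_in, sd_in = in_stats.mean(), in_stats.std() + 1e-12
    mu_out, sd_out = out_stats.mean(), out_stats.std() + 1e-12
    return (norm.logpdf(target_stat, mu_in, sd_in)
            - norm.logpdf(target_stat, mu_out, sd_out))

# Toy usage with synthetic shadow statistics (hypothetical numbers).
rng = np.random.default_rng(0)
in_stats = rng.normal(0.8, 0.1, size=64)
out_stats = rng.normal(1.2, 0.1, size=64)
target_attr = rng.normal(0.0, 0.9, size=(3, 224, 224))  # variance near 0.81
score = lrt_membership_score(explanation_statistic(target_attr, "var"),
                             in_stats, out_stats)
print("membership score:", score)
```

Thresholding such scores at a value calibrated on the shadow statistics is what makes the attack meaningful in the low false-positive rate regime the abstract emphasizes.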
