Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Accurate, Explainable, and Private Models: Providing Recourse While Minimizing Training Data Leakage

Keywords: [ explainability ] [ adversarial machine learning ] [ algorithmic recourse ] [ privacy ]


Abstract: Machine learning models are increasingly used in impactful domains to predict individual outcomes. Accordingly, many models provide algorithmic recourse to individuals who receive negative outcomes. However, recourse can be exploited by adversaries to disclose private information. This work presents the first attempt at mitigating such attacks. We introduce two novel methods for generating differentially private recourse: Differentially Private Model ($\texttt{DPM}$) and Laplace Recourse ($\texttt{LR}$). Using logistic regression classifiers on real-world and synthetic datasets, we find that $\texttt{DPM}$ and $\texttt{LR}$ substantially reduce what an adversary can infer, especially at low false positive rates ($\texttt{FPR}$). When the training dataset is sufficiently large, our novel $\texttt{LR}$ method is particularly successful at preventing privacy leakage while maintaining model and recourse accuracy.
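The abstract does not spell out the $\texttt{LR}$ algorithm, but its name suggests the Laplace mechanism applied to recourse. The sketch below illustrates the general idea under stated assumptions: for a logistic regression classifier, compute the minimal-change recourse for a negatively classified point (the closest point past the decision hyperplane), then perturb it with Laplace noise calibrated to a privacy budget `epsilon`. The `sensitivity` value here is a placeholder, not the paper's analysis; a real differential-privacy guarantee requires bounding how much the recourse can change between neighboring training sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative synthetic binary-classification data (not the paper's datasets).
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]


def recourse(x, target_margin=0.1):
    """Minimal-change recourse for a linear model: move x along the weight
    vector just past the hyperplane w.x + b = 0, so the prediction flips."""
    shift = (w @ x + b - target_margin) / (w @ w)
    return x - shift * w


def dp_recourse(x, epsilon=1.0, sensitivity=1.0):
    """Hypothetical Laplace-noised recourse (a sketch, not the paper's LR).

    `sensitivity` is a placeholder: a real analysis must bound how much the
    recourse vector can change across neighboring training datasets."""
    noise = rng.laplace(scale=sensitivity / epsilon, size=x.shape)
    return recourse(x) + noise


# Take a point the classifier rejects and show recourse flips its label.
x_neg = X[clf.predict(X) == 0][0]
print(clf.predict([recourse(x_neg)])[0])   # noiseless recourse crosses the boundary
print(dp_recourse(x_neg, epsilon=2.0))     # privatized recourse (noisy, may not flip)
```

Note the inherent trade-off the abstract reports: a smaller `epsilon` means larger Laplace noise, which protects the training data but can push the suggested recourse off-target, reducing recourse accuracy.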
