Poster

On the Adversarial Robustness of Causal Algorithmic Recourse

Ricardo Dominguez-Olmedo · Amir Karimi · Bernhard Schölkopf

Hall E #903

Keywords: [ SA: Trustworthy Machine Learning ] [ MISC: Causality ] [ SA: Accountability, Transparency and Interpretability ]


Abstract:

Algorithmic recourse seeks to provide actionable recommendations for individuals to overcome unfavorable classification outcomes from automated decision-making systems. Recourse recommendations should ideally be robust to reasonably small uncertainty in the features of the individual seeking recourse. In this work, we formulate the adversarially robust recourse problem and show that recourse methods that offer minimally costly recourse fail to be robust. We then present methods for generating adversarially robust recourse for linear and for differentiable classifiers. Finally, we show that regularizing the decision-making classifier to behave locally linearly and to rely more strongly on actionable features facilitates the existence of adversarially robust recourse.
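For the linear-classifier case the abstract mentions, the robust recourse idea admits a closed form: to remain favorably classified under any L2 perturbation of magnitude at most eps, the counterfactual's score must clear a margin of eps times the weight norm. The sketch below is an illustrative reconstruction under that assumption, not the paper's actual algorithm; the helper name and the L2-ball uncertainty model are choices made here for concreteness.

```python
import numpy as np

def robust_recourse_linear(w, b, x, eps):
    """Hypothetical sketch: minimal-L2-cost action `a` such that the linear
    classifier f(z) = sign(w @ z + b) remains favorable for every
    perturbation delta with ||delta||_2 <= eps of the counterfactual x + a.

    Assumes L2-ball uncertainty; the required score margin is eps * ||w||,
    since the worst-case perturbation shifts the score by -eps * ||w||.
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    score = w @ x + b
    margin = eps * np.linalg.norm(w)  # score needed to absorb any eps-ball shift
    if score >= margin:
        return np.zeros_like(x)       # already robustly favorable: no action needed
    # Minimal-norm correction moves along w just far enough to reach the margin.
    return ((margin - score) / (w @ w)) * w
```

A quick check of the robustness property: after applying the action, even the worst-case perturbation (pointing against `w`) cannot flip the classification, whereas a plain minimal-cost recourse that stops exactly at the decision boundary would be flipped by any such perturbation, which is the fragility the abstract highlights.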
