

Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Prediction without Preclusion: Recourse Verification with Reachable Sets

Avni Kothari · Bogdan Kulynych · Lily Weng · Berk Ustun


Abstract:

Machine learning models are now used to decide who will receive a loan, a job interview, or a public service. Standard techniques to build these models use features that characterize people but overlook their actionability. In domains like lending and hiring, models can assign predictions that are fixed, meaning that consumers denied loans and interviews are precluded from access to credit and employment. In this work, we introduce a formal testing procedure, called recourse verification, to flag models that assign fixed predictions. We develop machinery to reliably test the feasibility of recourse for any model under user-specified actionability constraints. We demonstrate how these tools can ensure recourse and adversarial robustness and use them to study the infeasibility of recourse in real-world lending datasets. Our results highlight how models can inadvertently assign fixed predictions that preclude access and motivate the need to design algorithms that account for actionability when developing models and providing recourse.
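To make the idea of recourse verification concrete, here is a minimal sketch of the underlying check described in the abstract: given a model, an individual's features, and user-specified actionability constraints, test whether any reachable feature vector receives the favorable prediction. This is a brute-force illustration over a small discrete action set, not the paper's actual machinery; the function names, the toy classifier, and the action sets are all hypothetical.

```python
from itertools import product

import numpy as np


def reachable_set(x, feasible_actions):
    """Enumerate feature vectors reachable from x under the
    user-specified actionability constraints.

    `feasible_actions` maps a feature index to the discrete changes a
    person could actually make; immutable features map to [0.0].
    """
    indices = list(feasible_actions)
    for deltas in product(*(feasible_actions[j] for j in indices)):
        x_new = np.array(x, dtype=float)
        for j, d in zip(indices, deltas):
            x_new[j] += d
        yield x_new


def verify_recourse(predict, x, feasible_actions, target=1):
    """Return True if some reachable point receives the target
    (favorable) prediction; False means the prediction is fixed,
    i.e., recourse is infeasible for this person."""
    return any(
        predict(x_new) == target for x_new in reachable_set(x, feasible_actions)
    )


# Toy usage with a hypothetical linear classifier over
# [income, n_open_accounts, age].
predict = lambda x: int(0.5 * x[0] + 1.0 * x[1] - 0.1 * x[2] >= 10.0)

x = np.array([4.0, 2.0, 30.0])
actions = {
    0: [0.0, 1.0, 2.0],  # income can rise by at most 2 units
    1: [0.0, 1.0],       # at most one new account can be opened
    2: [0.0],            # age is immutable
}

print(verify_recourse(predict, x, actions))  # False: the denial is fixed
```

In practice, enumerating the reachable set this way only scales to small discrete action spaces; the value of a dedicated verification procedure is to certify infeasibility reliably when the reachable set is too large to enumerate.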
