

Morning Poster
in
Workshop: Artificial Intelligence & Human Computer Interaction

Prediction without Preclusion: Recourse Verification with Reachable Sets

Avni Kothari · Bogdan Kulynych · Lily Weng · Berk Ustun


Abstract:

Machine learning models are often used to decide who will receive a loan, a job interview, or a public service. Standard techniques to build these models use features that characterize people but overlook their actionability. In domains like lending and hiring, models can assign predictions that are fixed, meaning that consumers who are denied loans and interviews are permanently locked out from access to credit and employment. In this work, we introduce a formal testing procedure, called recourse verification, to flag models that assign these "predictions without recourse." We develop machinery to reliably test the feasibility of recourse for any model given user-specified actionability constraints. We demonstrate how these tools can ensure recourse and adversarial robustness in real-world datasets and use them to study the infeasibility of recourse in real-world lending datasets. Our results highlight how models can inadvertently assign fixed predictions that permanently bar access, and the need to design algorithms that account for actionability when developing models and providing recourse.
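The verification task described in the abstract can be illustrated with a brute-force sketch: enumerate the points reachable from an individual's features under their actionability constraints, and check whether any reachable point flips the model's prediction. The function names, the toy classifier, and the action sets below are illustrative assumptions, not the authors' actual method, which this sketch only approximates for small discrete action sets.

```python
from itertools import product

def verify_recourse(predict, x, actions):
    """Check whether any reachable point flips the model's prediction.

    predict -- function mapping a feature tuple to 0 (deny) or 1 (approve)
    x       -- the individual's current feature tuple
    actions -- per-feature lists of feasible changes ([0] = immutable feature)
    Returns True if recourse exists, False if the prediction is fixed.
    """
    # The reachable set: every combination of feasible per-feature changes.
    for deltas in product(*actions):
        x_reachable = tuple(xi + d for xi, d in zip(x, deltas))
        if predict(x_reachable) == 1:
            return True
    return False

# Toy model: approve if income >= 50 ($1000s) and age >= 25 (years).
predict = lambda x: int(x[0] >= 50 and x[1] >= 25)

# Applicant with income 40 and age 22: income can rise by up to 20,
# but age is immutable, so the denial can never be overturned.
actions = [[0, 10, 20], [0]]
print(verify_recourse(predict, (40, 22), actions))  # False: prediction without recourse
```

A model that passes this check for every denied individual assigns no fixed denials; an exhaustive enumeration like this only scales to small discrete action sets, which is why a formal verification procedure is needed in practice.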
