Poster in Workshop: Responsible Decision Making in Dynamic Environments

Machine Learning Explainability & Fairness: Insights from Consumer Lending

Sormeh Yazdi · Laura Blattner · Duncan McElfresh · P-R Stark · Jann Spiess · Georgy Kalashnov


Abstract:

Stakeholders in consumer lending are debating whether lenders can responsibly use machine learning models in compliance with a range of pre-existing legal and regulatory requirements. Our work evaluates certain tools designed to help lenders and other model users understand and manage a range of machine learning models relevant to credit underwriting. Here, we focus on how certain explainability tools affect lenders’ ability to manage fairness concerns related to obligations to identify less discriminatory alternatives for models used to extend consumer credit. We evaluate these tools on a “usability” criterion that assesses whether and how well these tools enable lenders to construct alternative models that are less discriminatory. Notably, we find that dropping features identified as drivers of disparities does not lead to less discriminatory alternative models, and often leads to substantial performance deterioration. In contrast, more automated tools that search for a range of less discriminatory alternative models can successfully improve fairness metrics. The findings presented here are extracted from a larger study that evaluates certain proprietary and open-source tools in the context of additional regulatory requirements.
