Workshop: XXAI: Extending Explainable AI Beyond Deep Models and Classifiers
Invited Talk 7: Adrian Weller & Umang Bhatt - Challenges in Deploying Explainable Machine Learning
Explainable machine learning offers the potential to provide stakeholders with insights into model behavior, yet there is little understanding of how organizations use these methods in practice. In this talk, we discuss recent research exploring how organizations view and use explainability. We find that the majority of deployments serve not end-users but machine learning engineers, who use explainability to debug models. There is thus a gap between explainability in practice and the goal of external transparency, since explanations primarily serve internal stakeholders. Providing useful external explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we report findings from a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers, convened to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in the service of external transparency goals.