Poster
in
Workshop: ICML Workshop on Human in the Loop Learning (HILL)

Interpretable Machine Learning: Moving From Mythos to Diagnostics

Valerie Chen · Jeffrey Li · Joon Kim · Gregory Plumb · Ameet Talwalkar


Abstract:

Despite years of progress in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases. To address this gap, we argue that the IML community should embrace a diagnostic vision for the field. Instead of viewing IML methods as a panacea for a variety of overly broad use cases, we emphasize the need to systematically connect IML methods to narrower but better-defined target use cases. To formalize this vision, we propose a taxonomy that includes both methods and use cases, helping to conceptualize the current gaps between the two. Then, to connect these two sides, we describe a three-step workflow that enables researchers and consumers to define and validate IML methods as useful diagnostics. Ultimately, by applying this workflow, a more complete version of the taxonomy will allow consumers to find relevant methods for their target use cases and researchers to identify motivating use cases for their proposed methods.