Interpretable Machine Learning: Moving From Mythos to Diagnostics
Valerie Chen · Jeffrey Li · Joon Kim · Gregory Plumb · Ameet Talwalkar

Despite years of progress in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals stated as consumers' use cases. To address this gap, we argue for the IML community to embrace a diagnostic vision for the field. Instead of viewing IML methods as a panacea for a variety of overly broad use cases, we emphasize the need to systematically connect IML methods to narrower, but better-defined, target use cases. To formalize this vision, we propose a taxonomy that includes both methods and use cases, helping to conceptualize the current gaps between the two. To connect these two sides, we then describe a three-step workflow that enables researchers and consumers to define and validate IML methods as useful diagnostics. Ultimately, by applying this workflow, a more complete version of the taxonomy will allow consumers to find relevant methods for their target use cases and researchers to identify motivating use cases for their proposed methods.

Author Information

Valerie Chen (Carnegie Mellon University)
Jeffrey Li (University of Washington)
Joon Kim (Carnegie Mellon University)
Gregory Plumb (Carnegie Mellon University)
Ameet Talwalkar (Carnegie Mellon University)