

Poster in Workshop on Human-Machine Collaboration and Teaming

Machine Explanations and Human Understanding

Chacha Chen


Abstract:

Explanations are hypothesized to improve human understanding of machine learning models. However, empirical studies have found mixed and even negative results. What factors drive these mixed results remains an open question. To address it, we first conduct a literature survey and identify three core concepts that cover all existing quantitative measures of understanding: the task decision boundary, the model decision boundary, and model error. We argue that human intuitions are necessary for generating and evaluating explanations in human-AI decision making: without assumptions about human intuitions, explanations may improve human understanding of the model decision boundary, but they cannot improve human understanding of the task decision boundary or of model error (see the formal discussion in the appendix). We further validate, through empirical human-subject studies, the importance of human intuitions in shaping the outcomes of machine explanations.
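For intuition, here is a minimal sketch of how the three concepts could be formalized; the notation (x, y, h, e) is illustrative and is not taken from the paper or its appendix.

% Illustrative notation (assumed, not the paper's): x is an input, y(x) its
% ground-truth label, h(x) the model's prediction, and e(x) an explanation
% computed from the model h and the input x.
\begin{align*}
  \text{Task decision boundary:}  &\quad \text{the human predicts } y(x) \\
  \text{Model decision boundary:} &\quad \text{the human predicts } h(x) \\
  \text{Model error:}             &\quad \text{the human predicts } \mathbf{1}\!\left[\,h(x) \neq y(x)\,\right]
\end{align*}

Under this reading, e(x) is a function of the model and the input alone, so it can carry information about h(x); predicting y(x), or the error indicator that depends on y(x), additionally requires information about the true label, which the explanation itself does not supply and must instead come from human intuition.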
