Afternoon Poster
in
Workshop: Artificial Intelligence & Human Computer Interaction

Uncertainty Fingerprints: Interpreting Model Decisions with Human Conceptual Hierarchies

Angie Boggust · Hendrik Strobelt · Arvind Satyanarayan


Abstract:

Understanding machine learning model uncertainty is essential to comprehend model behavior, ensure safe deployment, and intervene appropriately. However, model confidences treat the output classes independently, ignoring relationships between classes that can reveal reasons for uncertainty, such as model confusion between related classes or an input with multiple valid labels. By leveraging human knowledge about related classes, we expand model confidence values into a hierarchy of concepts, creating an uncertainty fingerprint. An uncertainty fingerprint describes the model's confidence in every possible decision, distinguishing how the model proceeded from a broad idea to its precise prediction. Using hierarchical entropy, we compare fingerprints based on the model's decision-making process to categorize types of model uncertainty, identify common failure modes, and update dataset hierarchies.
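To illustrate the core idea, here is a minimal sketch of expanding leaf-class confidences into a concept hierarchy and comparing entropy at different levels. The toy hierarchy, class names, and the simple sum-over-children aggregation are assumptions for illustration; the paper's actual uncertainty-fingerprint construction and hierarchical-entropy definition may differ.

```python
import math

# Hypothetical toy hierarchy (an assumption, not the paper's): each
# parent concept maps to its child concepts; leaves are output classes.
HIERARCHY = {
    "animal": ["dog", "cat"],
    "vehicle": ["car", "truck"],
    "root": ["animal", "vehicle"],
}

def fingerprint(leaf_probs):
    """Expand leaf-class confidences into every node of the hierarchy:
    here a parent's confidence is the sum of its children's (one simple
    aggregation choice)."""
    conf = dict(leaf_probs)
    # Visit parents bottom-up so children are resolved first.
    for parent in ["animal", "vehicle", "root"]:
        conf[parent] = sum(conf[child] for child in HIERARCHY[parent])
    return conf

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A model confused between two related classes (dog vs. cat):
fp = fingerprint({"dog": 0.45, "cat": 0.45, "car": 0.05, "truck": 0.05})

leaf_entropy = entropy([0.45, 0.45, 0.05, 0.05])
coarse_entropy = entropy([fp["animal"], fp["vehicle"]])
```

In this example the leaf-level entropy is high, but at the coarse animal/vehicle level the model is confident (`fp["animal"] == 0.9`), revealing that the uncertainty stems from confusion between related classes rather than a genuinely ambiguous input — the kind of distinction the fingerprint is designed to expose.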
