

Poster

Improving Prototypical Visual Explanations with Reward Reweighing, Reselection, and Retraining

Aaron Li · Robin Netzorg · Zhihan Cheng · Zhuoqin Zhang · Bin Yu

Hall C 4-9 #2506
[ Paper PDF ] [ Slides ]
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

In recent years, work has gone into developing deep interpretable methods for image classification that clearly attribute a model's output to specific features of the data. One such method is the Prototypical Part Network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this architecture is able to produce visually interpretable classifications, it often learns to classify based on parts of the image that are not semantically meaningful. To address this problem, we propose the Reward Reweighing, Reselection, and Retraining (R3) post-processing framework, which performs three additional corrective updates to a pretrained ProtoPNet in an offline and efficient manner. The first two steps involve learning a reward model from collected human feedback and then aligning the prototypes with human preferences. The final step is retraining, which realigns the base features and the classifier layer of the original model with the updated prototypes. We find that our R3 framework consistently improves both the interpretability and the predictive accuracy of ProtoPNet and its variants.
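The first two corrective steps described in the abstract (reward reweighing and reselection) can be sketched as a small post-processing routine. This is a hypothetical illustration, not the authors' implementation: the function names, the scalar reward model, and the threshold are all assumptions, and the final retraining step (standard fine-tuning of the base features and classifier against the updated prototypes) is omitted.

```python
def r3_reselect(prototypes, candidate_patches, reward_fn, threshold=0.5):
    """Hedged sketch of R3's reweigh/reselect stages (names hypothetical).

    prototypes        -- current prototype representations of a trained ProtoPNet
    candidate_patches -- pool of image patches that could replace a prototype
    reward_fn         -- learned reward model scoring human preference, higher = better
    threshold         -- assumed cutoff below which a prototype is replaced
    """
    # Step 1: reward reweighing -- score each current prototype with the
    # reward model learned from collected human feedback.
    rewards = [reward_fn(p) for p in prototypes]

    # Step 2: reselection -- swap any prototype whose reward falls below the
    # threshold for the highest-reward candidate patch.
    best_patch = max(candidate_patches, key=reward_fn)
    return [best_patch if r < threshold else p
            for p, r in zip(prototypes, rewards)]
```

With a toy identity reward model, a low-reward prototype is replaced while a high-reward one is kept, e.g. `r3_reselect([0.1, 0.9], [0.2, 0.8], lambda x: x)` yields `[0.8, 0.9]`.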
