
Morning Poster
in
Workshop: Artificial Intelligence & Human Computer Interaction

Semi-supervised Concept Bottleneck Models

Jeeon Bae · Sungbin Shin · Namhoon Lee


Abstract:

Concept bottleneck models (CBMs) enhance the interpretability of deep neural networks by adding a concept layer between the input and output layers. However, this improvement comes at the cost of labeling concepts, which can be prohibitively expensive. To tackle this issue, we develop a semi-supervised learning (SSL) approach to CBMs that can make accurate predictions given only a handful of concept annotations. Our approach incorporates a strategy for effectively regulating erroneous pseudo-labels within standard SSL approaches. We conduct experiments on a range of labeling scenarios and show that our approach can reduce labeling cost substantially without sacrificing prediction performance.
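To make the setup concrete, here is a minimal sketch of the two ideas the abstract describes: a bottleneck that maps inputs to concepts and concepts to labels, and a confidence filter that discards uncertain concept pseudo-labels on unlabeled data. Everything below (the linear stages, the `threshold` parameter, the class and function names) is a hypothetical illustration, not the authors' actual architecture or regulation strategy.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneckSketch:
    """Toy concept bottleneck: input -> concepts -> label.

    Both stages are linear maps here for brevity; real CBMs use
    deep networks for the input-to-concept stage.
    """
    def __init__(self, n_features, n_concepts, seed=0):
        rng = np.random.default_rng(seed)
        self.W_c = rng.normal(scale=0.1, size=(n_features, n_concepts))
        self.w_y = rng.normal(scale=0.1, size=n_concepts)

    def predict_concepts(self, X):
        # Concept probabilities in (0, 1), one per concept.
        return sigmoid(X @ self.W_c)

    def predict_label(self, X):
        # Final prediction is computed only from the concept layer,
        # which is what makes the model interpretable.
        return sigmoid(self.predict_concepts(X) @ self.w_y)

def pseudo_label_concepts(model, X_unlabeled, threshold=0.95):
    """Keep only confident concept predictions as pseudo-labels.

    A plain confidence cutoff is shown here as a stand-in for the
    paper's (unspecified) strategy for regulating erroneous
    pseudo-labels.
    """
    probs = model.predict_concepts(X_unlabeled)
    confident = np.maximum(probs, 1.0 - probs) >= threshold
    pseudo = (probs >= 0.5).astype(float)
    return pseudo, confident
```

In a full SSL loop, the binary `pseudo` values where `confident` is true would be added to the small set of human-provided concept annotations before retraining the concept predictor.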