

Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

TabCBM: Concept-based Interpretable Neural Networks for Tabular Data

Mateo Espinosa Zarlenga · Zohreh Shams · Michael Nelson · Been Kim · Mateja Jamnik

Keywords: [ Tabular ] [ Concepts ] [ Interpretability ] [ Explainability ] [ Explainable AI ] [ Feature Selection ]


Abstract:

Concept-based interpretability addresses a deep neural network's opacity by constructing explanations for its predictions using high-level units of information referred to as concepts. Research in this area, however, has mainly focused on image and graph-structured data, leaving high-stakes medical and genomic tasks, whose data is tabular, out of reach of existing methods. In this paper, we address this gap by introducing the first definition of what a high-level concept may entail in tabular data. We use this definition to propose Tabular Concept Bottleneck Models (TabCBMs), a family of interpretable self-explaining neural architectures capable of learning high-level concept explanations for tabular tasks without concept annotations. We evaluate our method on synthetic and real-world tabular tasks and show that it outperforms, or performs competitively against, state-of-the-art methods while providing a high level of interpretability, as measured by its ability to discover known high-level concepts. Finally, we show that TabCBM can discover important high-level concepts in synthetic datasets inspired by critical tabular tasks (e.g., single-cell RNA-seq) and allows for human-in-the-loop concept interventions in which an expert can correct mispredicted concepts to boost the model's performance.
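As a rough illustration of the concept-bottleneck idea the abstract describes, the sketch below shows a minimal tabular model whose label predictor only sees learned concept scores, plus a simple hook for human-in-the-loop concept interventions. This is an assumption-based sketch, not the paper's TabCBM architecture: the class name, layer sizes, sigmoid concept scores, and the NaN-masked `interventions` convention are all illustrative choices made here.

```python
# Minimal sketch of a concept-bottleneck-style model for tabular data.
# All names and architectural details are illustrative assumptions and
# may differ from the actual TabCBM implementation.
import torch
import torch.nn as nn


class TabularConceptBottleneck(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Encoder maps raw tabular features to soft concept scores in [0, 1].
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
            nn.Sigmoid(),
        )
        # The label predictor sees only the concept scores (the "bottleneck"),
        # so every prediction is explainable in terms of those scores.
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, interventions=None):
        concepts = self.concept_encoder(x)
        if interventions is not None:
            # Human-in-the-loop intervention: overwrite selected concept
            # scores with expert-provided values (NaN = leave unchanged).
            mask = ~torch.isnan(interventions)
            concepts = torch.where(mask, interventions, concepts)
        return self.label_predictor(concepts), concepts


# Usage: predict, then let an expert assert that concept 0 is present.
model = TabularConceptBottleneck(n_features=20, n_concepts=5, n_classes=2)
x = torch.randn(1, 20)
logits, concepts = model(x)
fix = torch.full((1, 5), float("nan"))
fix[0, 0] = 1.0  # expert correction for the first concept
logits_fixed, _ = model(x, interventions=fix)
```

Because the label predictor acts only on the concept scores, an expert correction at the concept level propagates directly to the downstream prediction, which is what makes interventions of this kind effective.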
