TabCBM: Concept-based Interpretable Neural Networks for Tabular Data
Mateo Espinosa Zarlenga · Zohreh Shams · Michael Nelson · Been Kim · Mateja Jamnik
Event URL: https://openreview.net/forum?id=2YG1aTEaj4

Concept-based interpretability addresses a deep neural network's opacity by constructing explanations for its predictions using high-level units of information referred to as concepts. Research in this area, however, has been mainly focused on image and graph-structured data, leaving high-stakes medical and genomic tasks whose data is tabular out of reach of existing methods. In this paper, we address this gap by introducing the first definition of what a high-level concept may entail in tabular data. We use this definition to propose Tabular Concept Bottleneck Models (TabCBMs), a family of interpretable self-explaining neural architectures capable of learning high-level concept explanations for tabular tasks without concept annotations. We evaluate our method on synthetic and real-world tabular tasks and show that it outperforms or performs competitively against state-of-the-art methods while providing a high level of interpretability as measured by its ability to discover known high-level concepts. Finally, we show that TabCBM can discover important high-level concepts in synthetic datasets inspired by critical tabular tasks (e.g., single-cell RNAseq) and allow for human-in-the-loop concept interventions in which an expert can correct mispredicted concepts to boost the model's performance.
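The human-in-the-loop intervention mechanism the abstract describes can be sketched generically. The snippet below is an illustrative linear concept-bottleneck model, not the authors' TabCBM architecture: the dimensions, weights, and the `predict` helper are all hypothetical, chosen only to show how an expert-supplied concept value can overwrite a predicted concept activation before the label is computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 10 tabular features, 3 concepts, 2 classes.
W_concept = rng.normal(size=(10, 3))   # maps features to concept scores
W_label = rng.normal(size=(3, 2))      # maps concepts to label logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, interventions=None):
    """Concept-bottleneck forward pass with optional interventions.

    interventions: dict {concept_index: value in [0, 1]} that overwrites
    predicted concept activations, mimicking an expert correcting a
    mispredicted concept.
    """
    concepts = sigmoid(x @ W_concept)      # predicted concept activations
    if interventions:
        for idx, value in interventions.items():
            concepts[idx] = value          # expert override at the bottleneck
    logits = concepts @ W_label            # label depends only on concepts
    return concepts, logits

x = rng.normal(size=10)
c_pred, y_pred = predict(x)                       # unintervened prediction
c_fixed, y_fixed = predict(x, interventions={0: 1.0})  # expert fixes concept 0
```

Because the label head reads only the concept vector, correcting a concept directly changes the downstream prediction, which is what makes such interventions useful at test time.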

Author Information

Mateo Espinosa Zarlenga (University of Cambridge)
Zohreh Shams (Babylon Health)
Michael Nelson (University of Cambridge)
Been Kim (Google Brain)
Mateja Jamnik (University of Cambridge)
