Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Generalizing Neural Additive Models via Statistical Multimodal Analysis

Young Kyung Kim · Juan Di Martino · Guillermo Sapiro

Keywords: [ Explainable AI ] [ Interpretability ] [ Additive Models ] [ Multimodal Learning ]


Abstract:

Generalized Additive Models (GAM) and Neural Additive Models (NAM) have attracted considerable attention for addressing the trade-off between the accuracy and interpretability of machine learning models. While the field has focused on minimizing this trade-off, a limitation of GAM and NAM has rarely been addressed: their behavior on data containing multiple subpopulations, differentiated by latent variables that induce distinct relationships between features and outputs. The main reason behind this limitation is that these models collapse the multiple relationships, since they are forced to fit the data in a unimodal fashion. Here, we address and describe this overlooked limitation of "one-fits-all" interpretable methods and propose a Mixture of Neural Additive Models (MNAM) to overcome it. The proposed MNAM learns relationships between features and outputs in a multimodal fashion and assigns a probability to each mode. Given a subpopulation, MNAM activates one or more matching modes by increasing their probability. The objective of MNAM is thus to learn multiple relationships and to activate the right ones by automatically identifying the subpopulations of interest. Just as GAM and NAM have fixed relationships between features and outputs, MNAM maintains interpretability by having multiple fixed relationships. We demonstrate how the proposed MNAM balances rich representations against interpretability through numerous empirical observations and pedagogical studies. The code is available at (to be completed upon acceptance).
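The abstract does not spell out the architecture, but the description suggests a mixture of K additive models with a gating distribution over modes. Below is a minimal PyTorch sketch of one plausible realization; the names (`FeatureNet`, `MNAM`, `gate`) and design choices (MLP shape functions, a softmax gating head over the raw features) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Per-feature subnetwork, as in a standard NAM: maps one scalar
    feature to K outputs, one contribution per mode."""
    def __init__(self, num_modes: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_modes),
        )

    def forward(self, x):          # x: (batch, 1)
        return self.net(x)         # (batch, K)

class MNAM(nn.Module):
    """Hypothetical Mixture of Neural Additive Models: each of K modes
    is an additive model over per-feature shape functions, and a gating
    head assigns a probability to each mode, so the output is a
    K-component mixture rather than a single unimodal fit."""
    def __init__(self, num_features: int, num_modes: int):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            FeatureNet(num_modes) for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(num_modes))
        # Gating network scoring each mode from the raw features
        # (an assumed design; the paper may condition the gate differently).
        self.gate = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, num_modes),
        )

    def forward(self, x):          # x: (batch, num_features)
        # Sum per-feature contributions separately for every mode,
        # preserving the additive (hence interpretable) structure.
        contribs = torch.stack(
            [net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)],
            dim=-1,
        )                                             # (batch, K, num_features)
        mode_preds = contribs.sum(-1) + self.bias     # (batch, K)
        mode_probs = torch.softmax(self.gate(x), -1)  # (batch, K)
        return mode_preds, mode_probs

model = MNAM(num_features=8, num_modes=3)
preds, probs = model(torch.randn(16, 8))
print(preds.shape, probs.shape)  # torch.Size([16, 3]) torch.Size([16, 3])
```

Training such a sketch would typically minimize the negative log-likelihood of the resulting mixture, so that gradients flow mainly through the modes the gate activates for each subpopulation, consistent with the abstract's description of mode activation.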
