Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Interpreting Differentiable Latent States for Healthcare Time-series Data

Yu Chen · Nivedita Bijlani · Samaneh Kouchaki · Payam Barnaghi

Keywords: [ Machine Learning ] [ latent state ] [ digital healthcare ] [ Interpretability ]


Abstract:

Machine learning enables the extraction of clinical insights from large temporal datasets. Applications of such models include identifying disease patterns and predicting patient outcomes. However, limited interpretability poses challenges for deploying advanced machine learning in digital healthcare. Understanding the meaning of latent states is crucial for interpreting machine learning models, assuming they capture underlying patterns. In this paper, we present a concise algorithm that allows for i) interpreting latent states using highly related input features; ii) interpreting predictions using subsets of input features via latent states; and iii) interpreting changes in latent states over time. The proposed algorithm is applicable to any differentiable model. We demonstrate that this approach enables the identification of a daytime behavioral pattern for predicting nocturnal behavior in a real-world healthcare dataset.
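The abstract does not specify the algorithm's details, but since it requires only differentiability, a gradient-based reading is one plausible realization: relate latent states, predictions, and latent-state changes to input features through gradients of the model. The sketch below illustrates this idea on a toy GRU encoder; the model, function names, and attribution choices are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch (assumed gradient-based attribution, not the authors' exact algorithm):
# relate latent states of a differentiable sequence model to its input features.
import torch
import torch.nn as nn

torch.manual_seed(0)

class GRUEncoder(nn.Module):
    """Toy differentiable model: input sequence -> latent states -> prediction."""
    def __init__(self, n_features, n_latent):
        super().__init__()
        self.rnn = nn.GRU(n_features, n_latent, batch_first=True)
        self.head = nn.Linear(n_latent, 1)

    def forward(self, x):
        z, _ = self.rnn(x)        # latent states, shape (batch, time, n_latent)
        y = self.head(z[:, -1])   # prediction from the final latent state
        return z, y

def latent_feature_attribution(model, x, t, k):
    """(i) Relevance of each input feature to latent dimension k at time t,
    measured as the gradient of z[t, k] w.r.t. the whole input sequence."""
    x = x.clone().requires_grad_(True)
    z, _ = model(x)
    z[0, t, k].backward()
    return x.grad[0]              # gradient map of shape (time, n_features)

def prediction_attribution_via_latent(model, x):
    """(ii) Attribute the prediction to input features through the latent states,
    i.e. dy/dx = (dy/dz) * (dz/dx), obtained here with a single backward pass."""
    x = x.clone().requires_grad_(True)
    _, y = model(x)
    y[0, 0].backward()
    return x.grad[0]

def latent_change_attribution(model, x, t, k):
    """(iii) Attribute the change z[t, k] - z[t-1, k] between consecutive
    time steps to the input features."""
    x = x.clone().requires_grad_(True)
    z, _ = model(x)
    (z[0, t, k] - z[0, t - 1, k]).backward()
    return x.grad[0]

if __name__ == "__main__":
    n_features, n_latent, T = 6, 4, 10
    model = GRUEncoder(n_features, n_latent)
    x = torch.randn(1, T, n_features)   # one synthetic activity sequence

    grad_latent = latent_feature_attribution(model, x, t=T - 1, k=0)
    ranked = grad_latent.abs().sum(dim=0).argsort(descending=True)
    print("features most related to latent dim 0:", ranked.tolist())

    grad_pred = prediction_attribution_via_latent(model, x)
    print("prediction attribution shape:", tuple(grad_pred.shape))
```

In a healthcare setting such as the one described in the abstract, the inputs would be daily sensor or activity features and the ranked gradients would indicate which daytime behaviors most influence the latent states used to predict nocturnal behavior.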
