

Poster

DeepCoDA: personalized interpretability for compositional health data

Thomas Quinn · Dang Nguyen · Santu Rana · Sunil Gupta · Svetha Venkatesh

Keywords: [ Architectures ] [ Computational Biology and Genomics ] [ Healthcare ] [ Applications - Neuroscience, Cognitive Science, Biology and Health ]


Abstract:

Interpretability allows the domain-expert to directly evaluate the model's relevance and reliability, a practice that offers assurance and builds trust. In the healthcare setting, interpretable models should implicate relevant biological mechanisms independent of technical factors like data pre-processing. We define personalized interpretability as a measure of sample-specific feature attribution, and view it as a minimum requirement for a precision health model to justify its conclusions. Some health data, especially those generated by high-throughput sequencing experiments, have nuances that compromise precision health models and their interpretation. These data are compositional, meaning that each feature is conditionally dependent on all other features. We propose the Deep Compositional Data Analysis (DeepCoDA) framework to extend precision health modelling to high-dimensional compositional data, and to provide personalized interpretability through patient-specific weights. Our architecture maintains state-of-the-art performance across 25 real-world data sets, all while producing interpretations that are both personalized and fully coherent for compositional data.
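The compositional constraint and the idea of patient-specific weights can be illustrated with a small numerical sketch. The code below is not the authors' implementation: it assumes a standard centered log-ratio (CLR) transform and uses random placeholder weights in place of the network-produced patient-specific weights, simply to show how a per-sample linear attribution could be read off from compositional features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated high-throughput counts for 4 patients and 6 taxa (features).
counts = rng.integers(1, 1000, size=(4, 6)).astype(float)

# Closure: compositional data carry only relative information, so each
# sample is normalized to sum to 1. Any one feature is now determined
# by the others -- the conditional dependence noted in the abstract.
composition = counts / counts.sum(axis=1, keepdims=True)

# Centered log-ratio (CLR) transform: a common way to move compositions
# into unconstrained real space before modelling (an assumption here,
# not necessarily the transform used by DeepCoDA).
log_x = np.log(composition)
clr = log_x - log_x.mean(axis=1, keepdims=True)

# Hypothetical patient-specific weights; in the DeepCoDA framework these
# would be produced by the model for each sample.
weights = rng.normal(size=clr.shape)

# A personalized linear attribution: each patient's score is a weighted
# sum of their own transformed features, so each feature's contribution
# can be inspected per patient.
contributions = weights * clr            # per-patient, per-feature attribution
scores = contributions.sum(axis=1)

print("Per-patient feature attributions:\n", np.round(contributions, 3))
print("Per-patient scores:", np.round(scores, 3))
```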
