

Poster
in
Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Reframing the Brain Age Prediction Problem to a More Interpretable and Quantitative Approach

Neha Gianchandani · Mahsa Dibaji · Mariana Bento · Ethan MacDonald · Roberto Souza

Keywords: [ Grad-CAM ] [ Interpretability ] [ Brain Age Prediction ] [ Voxel-level brain age ]


Abstract:

Deep learning models have achieved state-of-the-art results in estimating brain age, an important brain health biomarker, from magnetic resonance (MR) images. However, most of these models provide only a global age prediction and rely on techniques such as saliency maps to interpret their results. These saliency maps highlight regions in the input image that were significant for the model's predictions, but they are hard to interpret, and saliency values are not directly comparable across samples. In this work, we reframe age prediction from MR images as an image-to-image regression problem in which we estimate a brain age for each voxel of the MR image. We compare voxel-wise age prediction models against global age prediction models and their corresponding saliency maps. Our preliminary results indicate that voxel-wise age prediction models are more interpretable, since they provide spatial information about the brain aging process, and they benefit from being quantitative.
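The reframing described in the abstract can be illustrated with a minimal NumPy sketch (not the authors' code; the volume shape, age value, and noise model below are all hypothetical): a voxel-wise model's training target is simply the chronological age broadcast to every voxel, its output averages back to a global age estimate, and its per-voxel deviation map is directly quantitative in years, unlike saliency values.

```python
import numpy as np

# Hypothetical illustration of voxel-wise brain age prediction as
# image-to-image regression (toy shapes and values, not real MR data).
rng = np.random.default_rng(0)
volume_shape = (4, 4, 4)      # toy MR volume; real scans are far larger
chronological_age = 62.0      # subject's true age (training target)

# Voxel-wise training target: chronological age broadcast to every voxel.
target_map = np.full(volume_shape, chronological_age)

# Stand-in for a trained model's output: per-voxel age estimates with
# regional deviations (here simulated as Gaussian noise).
predicted_map = target_map + rng.normal(0.0, 2.0, volume_shape)

# A global age estimate is recovered by averaging the voxel map...
global_estimate = float(predicted_map.mean())

# ...while the per-voxel delta map is in years and comparable across
# subjects, unlike saliency map values.
delta_map = predicted_map - chronological_age

print(round(global_estimate, 1))
```

The key point of the sketch is that spatial information (the delta map) comes for free from the prediction itself, rather than from a post-hoc attribution method such as Grad-CAM.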
