

Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Interpretable Alzheimer’s Disease Classification Via a Contrastive Diffusion Autoencoder.

Ayodeji Ijishakin · Ahmed Abdulaal · Adamos Hadjivasiliou · Sophie Martin · James Cole

Keywords: [ MRI ] [ Prototype learning ] [ Alzheimer's Disease Classification ] [ Generative modelling ] [ Interpretability ]


Abstract:

In visual object classification, humans often justify their choices by comparing objects to prototypical examples of that class. We may therefore increase the interpretability of deep learning models by imbuing them with a similar style of reasoning. In this work, we apply this principle by classifying Alzheimer's Disease based on the similarity of images to training examples within the latent space. We use a contrastive loss combined with a diffusion autoencoder backbone to produce a semantically meaningful latent space, such that neighbouring latents have similar image-level features. We achieve classification accuracy comparable to black-box approaches on a dataset of 2D MRI images, whilst producing human-interpretable model explanations. This work therefore contributes to the ongoing development of accurate and interpretable deep learning within medical imaging.
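The core mechanism described in the abstract, a contrastive loss over latent codes together with classification by similarity to training examples, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration in a generic PyTorch setup, not the authors' implementation: the supervised contrastive loss form, the mean-latent prototype classifier, and all names and hyperparameters are chosen for exposition, and in the paper the latents would come from the semantic encoder of a diffusion autoencoder rather than an arbitrary encoder.

# Minimal sketch (assumptions, not the authors' code): a supervised
# contrastive loss on latent codes, and classification of test images by
# cosine similarity to class prototypes formed from training latents.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Pull latents of the same class together, push other classes apart."""
    z = F.normalize(z, dim=1)                      # (N, D) unit-norm latents
    sim = z @ z.T / temperature                    # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, -1e9)         # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    return loss.mean()


@torch.no_grad()
def prototype_predict(z_test: torch.Tensor, z_train: torch.Tensor,
                      y_train: torch.Tensor) -> torch.Tensor:
    """Predict by nearest class prototype (mean of training latents per class).

    Returns indices into the sorted unique training labels.
    """
    z_test = F.normalize(z_test, dim=1)
    z_train = F.normalize(z_train, dim=1)
    protos = torch.stack([z_train[y_train == c].mean(dim=0)
                          for c in torch.unique(y_train)])
    protos = F.normalize(protos, dim=1)
    return (z_test @ protos.T).argmax(dim=1)

Because predictions reduce to similarities against training latents, an explanation for a given test scan can be read off directly as its most similar (prototypical) training examples in the latent space, which is the style of interpretability the abstract describes.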
