Poster
in
Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Explanation-guided dynamic feature selection for medical risk prediction

Nicasia Beebe-Wang · Wei Qiu · Su-In Lee

Keywords: [ dynamic feature selection ] [ imputation ] [ model explanations ] [ medical risk ] [ feature selection ]


Abstract:

In medical risk prediction scenarios, machine learning methods have demonstrated an ability to learn complex and predictive relationships among rich feature sets. In practice, however, when faced with new patients, we may not have access to all of the information expected by a trained risk model. We propose a framework that simultaneously provides flexible risk estimates for samples with missing features and context-dependent feature recommendations identifying which piece of information would be most valuable to collect next. Our approach uses a fixed prediction model, a local feature explainer, and ensembles of imputed samples to generate risk prediction intervals and feature recommendations. Applied to a myocardial infarction risk prediction task on the UK Biobank dataset, our approach predicts heart attack risk more efficiently, using fewer observed features, than traditional fixed imputation and global feature selection methods.
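The abstract's pipeline (fixed model, imputation ensembles for prediction intervals, and a per-patient score for which feature to acquire next) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frozen logistic model, the marginal-sampling imputer, and the spread-based acquisition score (a crude stand-in for the local feature explainer used in the paper) are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "fixed" prediction model: a frozen logistic regression with
# hypothetical weights; in practice any trained risk model would be used.
W = np.array([0.8, -0.5, 1.2, 0.3])
B = -0.2

def risk_model(X):
    """Frozen risk model mapping feature rows to probabilities."""
    return 1.0 / (1.0 + np.exp(-(X @ W + B)))

# Reference data, used only to sample plausible values for missing features.
X_train = rng.normal(size=(500, 4))

def impute_ensemble(x, observed_mask, n_draws=200):
    """Build an ensemble of completions of x: each missing entry is filled
    by sampling from the empirical marginal of the reference data."""
    ens = np.tile(x, (n_draws, 1))
    for j in np.where(~observed_mask)[0]:
        ens[:, j] = rng.choice(X_train[:, j], size=n_draws)
    return ens

def predict_interval(x, observed_mask, q=(5, 95)):
    """Risk prediction interval: spread of model outputs over the ensemble."""
    preds = risk_model(impute_ensemble(x, observed_mask))
    return np.percentile(preds, q[0]), np.percentile(preds, q[1])

def recommend_next_feature(x, observed_mask):
    """Score each unobserved feature by the prediction spread it induces
    across the ensemble; the widest-spread feature is recommended next."""
    base = impute_ensemble(x, observed_mask)
    scores = {}
    for j in np.where(~observed_mask)[0]:
        # Vary only feature j, holding other imputed features at their mean.
        varied = np.tile(base.mean(axis=0), (base.shape[0], 1))
        varied[:, j] = base[:, j]
        scores[j] = risk_model(varied).std()
    return max(scores, key=scores.get)

# Example patient with features 1 and 2 unobserved.
x = np.array([0.5, 0.0, 0.0, -1.0])
mask = np.array([True, False, False, True])
lo, hi = predict_interval(x, mask)
nxt = recommend_next_feature(x, mask)
```

Here the recommendation falls on the unobserved feature with the largest effect on the model's output, mirroring the idea of choosing the next measurement by its influence on the current prediction.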
