Workshop

Interpretable Machine Learning in Healthcare

Yuyin Zhou · Xiaoxiao Li · Vicky Yao · Pengtao Xie · Qi Dou · Nicha Dvornek · Julia Schnabel · Judy Wawira · Yifan Peng · Ronald Summers · Alan Karthikesalingam · Lei Xing · Eric Xing

The application of machine learning (ML) in healthcare is rapidly gaining momentum. However, the black-box nature of most existing ML approaches limits the interpretability and verifiability of clinical predictions. As these systems are pervasively introduced into the healthcare domain, which demands a high level of safety and security, it becomes critical to develop methodologies that explain their predictions. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. It is equally essential to develop ML systems that are interpretable and transparent by design. For instance, by exploiting structured knowledge or prior clinical information, one can design models whose learned representations are more coherent with clinical reasoning. Such designs may also help mitigate biases in the learning process and identify the variables most relevant to medical decisions.

In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, and clinical fields to discuss the challenges, definitions, formalisms, and evaluation protocols of interpretable medical machine intelligence. We will also introduce possible solutions, such as logic and symbolic reasoning over medical knowledge graphs, uncertainty quantification, and compositional models. We hope this workshop offers a fruitful step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.

Timezone: America/Los_Angeles

Schedule