Applying machine learning (ML) in healthcare is gaining momentum rapidly. However, the black-box nature of existing ML approaches inevitably limits the interpretability and verifiability of clinical predictions. As these systems are pervasively introduced into healthcare, a domain that demands a high level of safety and security, developing methodologies to explain their predictions becomes critical to enhancing the interpretability of medical intelligence. Such methodologies would make medical decisions more trustworthy and reliable for physicians and could ultimately facilitate deployment. In addition, it is essential to develop ML systems that are interpretable and transparent by design: for instance, by exploiting structured knowledge or prior clinical information, one can build models that learn representations more aligned with clinical reasoning, mitigate biases in the learning process, or identify the variables most relevant to medical decisions.

In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, public health, computational biology, biomedical informatics, and clinical fields to discuss the challenges, definitions, formalisms, and evaluation protocols of interpretable medical machine intelligence. The workshop appeals to ICML audiences because interpretability is a major obstacle to deploying ML in critical domains such as healthcare. By providing a platform that fosters collaboration and discussion among attendees, we hope the workshop offers a step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.
Sat 6:15 a.m. - 6:30 a.m. | Welcoming remarks and introduction
Sat 6:30 a.m. - 7:00 a.m. | Invited talk #1: Cynthia Rudin, "Almost Matching Exactly for Interpretable Causal Inference"
Abstract: I will present a matching framework for causal inference in the potential outcomes setting called Almost Matching Exactly. This framework has several important elements: (1) Its algorithms create matched groups that are interpretable. The goal is to match treatment and control units on as many covariates as possible, or "almost exactly." (2) Its algorithms create accurate estimates of individual treatment effects. This is because we use machine learning on a separate training set to learn which features are important for matching. The key constraint is that units are always matched on a set of covariates that together can predict the outcome well. (3) Our methods are fast and scalable. In summary, these methods rival black-box machine learning methods in their estimation accuracy but have the benefit of being interpretable and easier to troubleshoot. Our lab website is here: https://almost-matching-exactly.github.io
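The core idea in the abstract above, grouping treatment and control units that agree on their covariates and estimating effects within each group, can be sketched in a few lines. This is a minimal illustration of exact matching on binary covariates only, not the lab's actual FLAME/DAME algorithms (which also learn, from a separate training set, which covariates may be dropped); the function names and toy cohort are invented for this example.

```python
# Illustrative sketch: exact matching on binary covariates, in the
# spirit of Almost Matching Exactly. Not the authors' implementation.
from collections import defaultdict

def exact_match_groups(units, covariate_keys):
    """Group units by covariate signature; keep only groups containing
    at least one treated and one control unit."""
    groups = defaultdict(list)
    for u in units:
        signature = tuple(u[k] for k in covariate_keys)
        groups[signature].append(u)
    return {
        sig: g for sig, g in groups.items()
        if any(u["treated"] for u in g) and any(not u["treated"] for u in g)
    }

def group_treatment_effect(group):
    """Difference in mean outcomes between treated and control units."""
    treated = [u["outcome"] for u in group if u["treated"]]
    control = [u["outcome"] for u in group if not u["treated"]]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Toy cohort: two binary covariates, a treatment flag, and an outcome.
cohort = [
    {"x1": 1, "x2": 0, "treated": True,  "outcome": 3.0},
    {"x1": 1, "x2": 0, "treated": False, "outcome": 1.0},
    {"x1": 0, "x2": 1, "treated": True,  "outcome": 2.0},
    {"x1": 0, "x2": 1, "treated": False, "outcome": 2.5},
    {"x1": 1, "x2": 1, "treated": True,  "outcome": 4.0},  # no control match; dropped
]

matched = exact_match_groups(cohort, ["x1", "x2"])
for sig, group in matched.items():
    # each matched signature with its within-group effect estimate
    print(sig, group_treatment_effect(group))
```

Because every matched group shares an identical covariate signature, each estimate can be read off directly ("units with x1=1, x2=0 improved by 2.0 on average"), which is the interpretability property the talk emphasizes.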
Sat 7:00 a.m. - 7:30 a.m. | Invited talk #2: James Zou, "Machine learning to make clinical trials more efficient and diverse"
Sat 7:30 a.m. - 7:40 a.m. | Poster spotlight #1
Sat 7:40 a.m. - 8:30 a.m. | Posters I and coffee break
Sat 8:30 a.m. - 9:00 a.m. | Invited talk #3: Rich Caruana, "Friends Don't Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning for Healthcare"
Sat 9:00 a.m. - 9:30 a.m. | Invited talk #4: Been Kim, "How to stop worrying about interpretability, and start making progress"
Sat 9:30 a.m. - 10:30 a.m. | Lunch break
Sat 10:30 a.m. - 11:00 a.m. | Invited talk #5: Elliot K. Fishman, M.D., "The Early Detection of Pancreatic Cancer: The Role of AI"
Sat 11:00 a.m. - 11:30 a.m. | Invited talk #6: Alan Yuille, "The Felix Project: Deep Networks To Detect Pancreatic Neoplasms"
Sat 11:30 a.m. - 11:50 a.m. | Poster spotlight #2
Sat 11:50 a.m. - 12:00 p.m. | Coffee break
Sat 12:00 p.m. - 12:30 p.m. | Invited talk #7: Noa Dagan, M.D., and Noam Barda, M.D., "Model explainability: the perspective of implementing prediction models for patient care in a large healthcare organization"
Sat 12:30 p.m. - 1:00 p.m. | Invited talk #8: Zhangyang "Atlas" Wang, "'Free Knowledge' in Chest X-rays: Contrastive Learning of Images and Their Radiomics"
Sat 1:00 p.m. - 1:10 p.m. | Poster spotlight #3
Sat 1:10 p.m. - 1:15 p.m. | Closing remarks
Sat 1:15 p.m. - 2:30 p.m. | Posters II and coffee break