Workshop
3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)
Weina Jin · Ramin Zabih · S. Kevin Zhou · Yuyin Zhou · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang · Yuzhe Yang · Agni Kumar

Fri Jul 28 12:00 PM -- 08:00 PM (PDT) @ Ballroom C
Event URL: https://sites.google.com/view/imlh2023/home?authuser=1

Applying machine learning (ML) in healthcare is rapidly gaining momentum. However, the black-box nature of existing ML approaches limits the interpretability and verifiability of clinical predictions. As these systems are pervasively introduced into healthcare, a domain that demands a high level of safety and security, it becomes critical to develop methodologies that explain their predictions. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. In addition, it is essential to develop more interpretable and transparent ML systems. For instance, by exploiting structured knowledge or prior clinical information, one can design models that learn representations more aligned with clinical reasoning. Such designs may also help mitigate biases in the learning process or identify variables more relevant to medical decisions.

In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, public health, computational biology, biomedical informatics, and clinical fields to facilitate discussion of the challenges, definitions, formalisms, and evaluation protocols of interpretable medical machine intelligence. The workshop will be held in a large-attendance talk format, with about 150 expected attendees. It appeals to the ICML audience because interpretability is a major barrier to deploying ML in critical domains such as healthcare. By providing a platform that fosters collaboration and discussion among attendees, we hope the workshop offers a step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.

Author Information

Weina Jin (Simon Fraser University)
Ramin Zabih (Cornell University)
S. Kevin Zhou (Institute of Computing Technology, Chinese Academy of Sciences)
Yuyin Zhou (Johns Hopkins University)
Xiaoxiao Li (University of British Columbia)
Yifan Peng (Weill Cornell Medicine)
Zongwei Zhou (Johns Hopkins University)
Yucheng Tang (Vanderbilt University)
Yuzhe Yang (MIT)
Agni Kumar (Apple)
Agni Kumar

Agni Kumar is a Research Scientist on Apple’s Health AI team. She studied at MIT, graduating with an M.Eng. in Machine Learning and B.S. degrees in Mathematics and Computer Science. Her thesis on modeling the spread of healthcare-associated infections led her to join projects at Apple with applied health focuses, specifically understanding cognitive decline from device usage data and discerning respiratory rate from wearable microphone audio. She has published hierarchical reinforcement learning research and predictive analytics work in conferences and journals, including EMBC, PLOS Computational Biology, and Telehealth and Medicine Today. She was an organizer of ICML’s first-ever *Computational Approaches to Mental Health* workshop in 2021, has volunteered at WiML workshops, and has served as a reviewer for NeurIPS. For joy, Agni leads an Apple-wide global diversity network focused on encouraging mindfulness to find pockets of peace each day.
