

Invited talk in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Invited talk: Alex Lang - How to get over your Black Box trust issues?


Abstract:

Only the bravest machine learners have dared to tackle problems in medicine. Why? The most important reason is that the end users of ML models in medicine are skeptical of ML, so one must jump through a multitude of hoops to deploy ML solutions. The common approach in the field is to focus on interpretability and force our ML solutions to be white box. However, this handcuffs the potential of our models from the start, and medicine is already a challenging enough domain to model: data is hard to collect, the data one does get is messy, and the tasks are often less intuitive than working with images or text.

Is there another way? Yes! Our approach is to embrace black box ML solutions but to deploy them carefully in clinical trials by rigorously controlling the risk exposure that comes from trusting them. I will use Alzheimer’s disease as an example to dive into our state-of-the-art deep time series neural networks. Once I have explained our black box as well as a human reasonably can, I will detail how the outputs of the deep nets can be used in different clinical trials. In these applications, the end user prespecifies their risk tolerance, which leads to different contexts of use for the ML models. Our work demonstrates that we can embrace black box solutions by focusing on developing rigorous deployment methods.
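
To make the "prespecified risk tolerance" idea concrete, here is a minimal, hypothetical sketch in Python; it is not the speaker's actual method, and all names, data, and the false-positive-rate criterion are illustrative assumptions. The idea: given a black box model's risk scores on a held-out calibration set, choose the most inclusive decision threshold whose empirical false-positive rate stays within a tolerance the end user fixed in advance.

import numpy as np

def threshold_for_risk(scores, labels, max_fpr):
    """Lowest (most inclusive) score threshold whose false-positive rate on a
    calibration set stays at or below the prespecified tolerance max_fpr."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    negatives = scores[~labels]
    best = np.inf  # "flag nobody" always satisfies the tolerance
    # Scan candidate thresholds from most to least conservative; the FPR can
    # only grow as the threshold drops, so stop at the first violation.
    for t in np.sort(np.unique(scores))[::-1]:
        if np.mean(negatives >= t) > max_fpr:
            break
        best = t
    return best

# Illustrative calibration data: model scores for slow (label 0) and fast
# (label 1) progressors, with a 10% tolerance for wrongly flagging a slow one.
rng = np.random.default_rng(0)
cal_scores = np.concatenate([rng.normal(0.3, 0.1, 200), rng.normal(0.7, 0.1, 200)])
cal_labels = np.concatenate([np.zeros(200, dtype=bool), np.ones(200, dtype=bool)])
t = threshold_for_risk(cal_scores, cal_labels, max_fpr=0.10)
print(f"deployment threshold: {t:.3f}")

A tighter tolerance yields a more conservative threshold and hence a narrower context of use; a looser one lets the model's outputs drive more of the trial decisions.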
