The issue of disagreement among human experts is ubiquitous in both machine learning and medicine. In medicine, this often corresponds to doctors disagreeing on a patient's diagnosis. In this work, we show that machine learning models can be successfully trained to assign uncertainty scores to the data instances that produce high expert disagreement. In particular, they can identify the patient cases that would benefit most from a medical second opinion. Our central methodological finding is that Direct Uncertainty Prediction (DUP), training a model to predict an uncertainty score directly from the raw patient features, works better than Uncertainty Via Classification (UVC), the two-step process of training a classifier and postprocessing its output distribution to obtain an uncertainty score. We show this both with a theoretical result and in extensive evaluations on a large-scale medical imaging application.
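The DUP-versus-UVC contrast in the abstract can be made concrete with a minimal sketch on synthetic data. This is not the paper's actual setup (which uses deep networks on medical images and real grader labels): the synthetic disagreement signal, the simple models, and all variable names below are illustrative assumptions. UVC trains a classifier and postprocesses its predicted distribution (here, via entropy); DUP regresses the disagreement target directly from the features.

```python
# Illustrative sketch only: synthetic data, simple scikit-learn models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))            # raw "patient features" (synthetic)
w = rng.normal(size=5)
p = 1.0 / (1.0 + np.exp(-(X @ w)))       # latent per-case label probability
y = (rng.random(500) < p).astype(int)    # one noisy "expert" label per case
disagreement = 4 * p * (1 - p)           # stand-in expert-disagreement score,
                                         # peaks for borderline cases (p ~ 0.5)

# Uncertainty Via Classification (UVC): fit a classifier on hard labels,
# then postprocess its output distribution into an entropy score.
clf = LogisticRegression().fit(X, y)
probs = clf.predict_proba(X)
uvc_score = -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Direct Uncertainty Prediction (DUP): predict the disagreement score
# directly from the raw features, with no intermediate classifier.
dup = RandomForestRegressor(random_state=0).fit(X, disagreement)
dup_score = dup.predict(X)
```

Cases with the highest `dup_score` (or `uvc_score`) would be the ones flagged for a second opinion; the paper's theoretical result concerns why the direct route tends to preserve more of the disagreement signal than the classify-then-postprocess route.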
Author Information
Maithra Raghu (Cornell University / Google Brain)
Katy Blumer (Google)
Rory Sayres (Google)
Ziad Obermeyer (UC Berkeley School of Public Health)
Bobby Kleinberg (Cornell)
Sendhil Mullainathan (Harvard University)
Jon Kleinberg (Cornell University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Direct Uncertainty Prediction for Medical Second Opinions
  Wed. Jun 12th, 07:00 -- 07:05 PM, Room 201
More from the Same Authors
- 2023 Poster: Underspecification Presents Challenges for Credibility in Modern Machine Learning
  Alexander D'Amour · Katherine Heller · Dan Moldovan · Ben Adlam · Babak Alipanahi · Alex Beutel · Christina Chen · Jonathan Deaton · Jacob Eisenstein · Matthew Hoffman · Farhad Hormozdiari · Neil Houlsby · Shaobo Hou · Ghassen Jerfel · Alan Karthikesalingam · Mario Lucic · Yian Ma · Cory McLean · Diana Mincu · Akinori Mitani · Andrea Montanari · Zachary Nado · Vivek Natarajan · Christopher Nielson · Thomas F. Osborne · Rajiv Raman · Kim Ramasamy · Rory Sayres · Jessica Schrouff · Martin Seneviratne · Shannon Sequeira · Harini Suresh · Victor Veitch · Maksym Vladymyrov · Xuezhi Wang · Kellie Webster · Steve Yadlowsky · Taedong Yun · Xiaohua Zhai · D. Sculley
- 2022: How Neural Networks See, Learn and Forget
  Maithra Raghu
- 2022 Workshop: Knowledge Retrieval and Language Models
  Maithra Raghu · Urvashi Khandelwal · Chiyuan Zhang · Matei Zaharia · Alexander Rush
- 2019 Workshop: Identifying and Understanding Deep Learning Phenomena
  Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao
- 2018 Poster: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
  Been Kim · Martin Wattenberg · Justin Gilmer · Carrie Cai · James Wexler · Fernanda Viégas · Rory Sayres
- 2018 Oral: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
  Been Kim · Martin Wattenberg · Justin Gilmer · Carrie Cai · James Wexler · Fernanda Viégas · Rory Sayres
- 2018 Poster: Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
  Maithra Raghu · Alexander Irpan · Jacob Andreas · Bobby Kleinberg · Quoc Le · Jon Kleinberg
- 2018 Oral: Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
  Maithra Raghu · Alexander Irpan · Jacob Andreas · Bobby Kleinberg · Quoc Le · Jon Kleinberg
- 2018 Poster: An Alternative View: When Does SGD Escape Local Minima?
  Bobby Kleinberg · Yuanzhi Li · Yang Yuan
- 2018 Oral: An Alternative View: When Does SGD Escape Local Minima?
  Bobby Kleinberg · Yuanzhi Li · Yang Yuan
- 2017 Poster: On the Expressive Power of Deep Neural Networks
  Maithra Raghu · Ben Poole · Surya Ganguli · Jon Kleinberg · Jascha Sohl-Dickstein
- 2017 Talk: On the Expressive Power of Deep Neural Networks
  Maithra Raghu · Ben Poole · Surya Ganguli · Jon Kleinberg · Jascha Sohl-Dickstein