Morning Poster in Workshop: Artificial Intelligence & Human Computer Interaction
Human-in-the-Loop Out-of-Distribution Detection with False Positive Rate Control
Harit Vishwakarma · Heguang Lin · Ramya Vinayak
Abstract:
Robustness to Out-of-Distribution (OOD) samples is essential for the successful deployment of machine learning models in the open world. Since it is not possible to have a priori access to a variety of OOD data before deployment, several recent works have focused on designing scoring functions to quantify OOD uncertainty. These methods often find a threshold that achieves $95\%$ true positive rate (TPR) on the In-Distribution (ID) data used for training and use this threshold for detecting OOD samples. However, this can lead to a very high false positive rate (FPR): in a comprehensive evaluation on the Open-OOD benchmark, the FPR ranges between $60\%$ and $96\%$ on several ID and OOD dataset combinations. In contrast, practical systems deal with a variety of OOD samples on the fly, and critical applications, e.g., medical diagnosis, demand guaranteed control of the FPR. To meet these challenges, we propose a mathematically grounded framework for human-in-the-loop OOD detection, wherein expert feedback is used to update the threshold. This allows the system to adapt to variations in the OOD data while adhering to the quality constraints. We propose an algorithm that uses anytime-valid confidence intervals based on the Law of the Iterated Logarithm (LIL). Our theoretical results show that the system meets the FPR constraint while minimizing human feedback on in-distribution points. Another key feature of the system is that it works with any existing post-hoc OOD uncertainty-quantification method. We evaluate our system empirically on a mixture of benchmark OOD datasets in image classification tasks, with CIFAR-10 and CIFAR-100 as in-distribution datasets, and show that our method maintains an FPR of at most $5\%$ while maximizing TPR.
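As a rough illustration of the setup the abstract describes, the sketch below shows the common post-hoc recipe it critiques (choosing the threshold that retains $95\%$ TPR on ID scores) and an LIL-style anytime-valid confidence radius of the kind the proposed algorithm builds on. This is a minimal sketch under assumed synthetic scores; the variable names, the constant `c`, and `lil_radius` are illustrative, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ID confidence scores (higher = more likely in-distribution).
id_scores = rng.normal(loc=2.0, scale=1.0, size=10_000)

# Common post-hoc recipe: pick the threshold that keeps 95% of ID scores
# above it, i.e., 95% TPR on the ID data used for training.
tpr_target = 0.95
threshold = np.quantile(id_scores, 1.0 - tpr_target)

# Any sample scoring below the threshold would be flagged as OOD.
tpr = np.mean(id_scores >= threshold)
assert abs(tpr - tpr_target) < 0.01

def lil_radius(n: int, delta: float = 0.05, c: float = 1.7) -> float:
    """LIL-style anytime-valid confidence radius for a running mean of n
    bounded observations: the deviation shrinks roughly as
    sqrt(log log n / n), uniformly over all n.

    The constant c is illustrative; the paper's intervals have their own
    constants and form.
    """
    n = max(n, 3)  # log(log(n)) needs n >= 3 to be positive
    return c * np.sqrt(np.log(np.log(n)) / n + np.log(1.0 / delta) / n)
```

A human-in-the-loop system of the kind described could compare a running FPR estimate against the target ($5\%$) plus/minus `lil_radius(n)`, soliciting expert feedback and updating the threshold only while the interval cannot certify the constraint.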