A Strictly Proper Scoring Rule and a Calibration Metric for Interval-Censored Data Analysis
Hiroki Yanagisawa ⋅ Shunta Akiyama
Abstract
Interval-censored data present unique challenges in statistical analysis because event times are only partially observed within known intervals, requiring assumptions about the censoring mechanism. This paper explores the theoretical relationship between two foundational assumptions: independent monitoring and non-informative censoring. We show that these assumptions are equivalent for Case-1 interval-censored data but, as a synthetic dataset example demonstrates, not for Case-$K$ interval-censored data with $K \geq 2$. Additionally, we propose the first strictly proper scoring rule and the first calibration metric specifically designed for interval-censored data, under the constant-sum assumption and the non-informative censoring assumption, respectively. Our empirical evaluations on real-world datasets show that a neural network model trained with our scoring rule is competitive with established statistical baselines while offering enhanced flexibility. These contributions advance both the theoretical understanding and the practical analysis of interval-censored data.