Tailoring Strictly Proper Scoring Rules for Downstream Tasks: An Application to Causal Inference
Roman Plaud ⋅ Alexandre Perez-Lebel ⋅ Antoine Saillenfest ⋅ Thomas Bonald ⋅ Marine Le Morvan ⋅ Gael Varoquaux ⋅ Matthieu Labeau
Abstract
Probabilistic models are typically trained using task-agnostic objectives like log-loss, which can lead to significant errors in downstream estimation. This disconnect is especially critical in Inverse Probability Weighting (IPW) for causal inference, where propensity score errors near $0$ and $1$ often lead to high bias and variance. We propose a principled framework for deriving task-specific strictly proper scoring rules by matching the local curvature of the downstream error metric. We apply this to Average Treatment Effect (ATE) estimation, deriving a closed-form loss and its corresponding canonical probability mapping that can be readily integrated with any model, such as a neural network or a gradient boosting algorithm. Extensive evaluations on causal inference benchmarks demonstrate that our tailored objective consistently outperforms standard likelihood-based and covariate-balancing approaches.
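To make the setting concrete, below is a minimal sketch of the standard IPW (Horvitz-Thompson) estimator of the ATE that the abstract refers to, on simulated data with a known treatment effect. This is an illustration of the downstream estimator only, not of the paper's tailored scoring rule; all variable names and the data-generating process are hypothetical. Note how each unit is weighted by the inverse of its propensity score, so errors in propensities near $0$ or $1$ inflate the weights and hence the bias and variance.

```python
import numpy as np

# Hypothetical simulated data with a known true ATE of 2.0.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                 # a single confounder
e = 1.0 / (1.0 + np.exp(-x))           # true propensity score e(X) = P(T=1 | X)
t = rng.binomial(1, e)                 # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)   # outcome; treatment adds exactly 2.0

# IPW (Horvitz-Thompson) estimator of the ATE:
# E[ T*Y / e(X) - (1-T)*Y / (1 - e(X)) ].
# Propensities near 0 or 1 produce extreme inverse weights,
# which is the instability the paper's tailored loss targets.
ate_ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))
print(f"IPW ATE estimate: {ate_ipw:.3f} (true ATE = 2.0)")
```

In practice the propensity score `e` is unknown and must be estimated from data (e.g. by logistic regression or gradient boosting); the paper's contribution is a strictly proper scoring rule for fitting that model so that the resulting plug-in IPW estimate is accurate, rather than the propensities being merely well-calibrated on average.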