Poster in Workshop: Next Generation of AI Safety

On the Calibration of Conditional-Value-at-Risk

Rajeev Verma · Volker Fischer · Eric Nalisnick

Keywords: [ fat-tails ] [ refinement ] [ CVaR ] [ tail-risk measures ] [ calibration ]


Abstract:

To promote risk-averse behaviour in safety-critical AI applications, Conditional-Value-at-Risk (CVaR), a spectral risk measure, is increasingly employed as the loss aggregation function of choice. We study the calibration and refinement properties of CVaR by deriving an extension of the classical proper scoring rule risk decomposition to CVaR. Our result reveals a trade-off: CVaR provides tail-sensitive calibration and refinement, but at the cost of calibration and refinement for non-tail events. This calls for a careful cost-benefit analysis when adopting CVaR as the risk measure of choice for AI safety.
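
For background, a minimal sketch of the loss aggregation the abstract refers to, using the standard Rockafellar–Uryasev formulation of CVaR (not necessarily the paper's own notation): at confidence level $\alpha \in (0,1)$, the CVaR of a loss $L$ is

\[
\mathrm{CVaR}_\alpha(L) \;=\; \min_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(L - t)_+\big] \,\Big\},
\]

which for a continuous loss distribution equals $\mathbb{E}\big[L \mid L \ge \mathrm{VaR}_\alpha(L)\big]$, the average loss over the worst $(1-\alpha)$ fraction of outcomes. Averaging only over the tail is what makes the measure tail-sensitive, and it is also why non-tail events receive no weight in the aggregate, which is the intuition behind the trade-off stated above.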
