Poster
Uniform Convergence of Rank-weighted Learning
Justin Khim · Liu Leqi · Adarsh Prasad · Pradeep Ravikumar

Thu Jul 16 07:00 AM -- 07:45 AM & Thu Jul 16 06:00 PM -- 06:45 PM (PDT)

The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as high-stakes decision-making and societal settings, it is clear that these models are not evaluated solely by their average performance. In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, unify popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences on a logistic regression example.
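For intuition, the following is a minimal sketch of an empirical L-risk computation: losses are sorted and the i-th order statistic is weighted by a rank-dependent weighting function evaluated at i/n. The names empirical_l_risk and cvar_weight, and the exact normalization, are illustrative assumptions rather than the paper's construction; they only show how conditional value-at-risk and the ordinary expected risk arise as special cases of rank-dependent weighting.

import numpy as np

def empirical_l_risk(losses, weight_fn):
    # Sort losses into order statistics l_(1) <= ... <= l_(n) and weight the
    # i-th order statistic by weight_fn(i / n), a rank-dependent weighting.
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.size
    ranks = np.arange(1, n + 1) / n
    return np.sum(weight_fn(ranks) * losses) / n

def cvar_weight(alpha):
    # Conditional value-at-risk at level alpha as a special case:
    # uniform weight 1/alpha on the worst alpha-fraction of losses.
    return lambda u: (u > 1 - alpha) / alpha

rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)
print(empirical_l_risk(losses, cvar_weight(0.1)))           # approximate CVaR at level 0.1
print(empirical_l_risk(losses, lambda u: np.ones_like(u)))  # ordinary average (expected) risk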

Author Information

Justin Khim (Carnegie Mellon University)
Liu Leqi (Carnegie Mellon University)
Adarsh Prasad (Carnegie Mellon University)
Pradeep Ravikumar (Carnegie Mellon University)
