

Poster in Workshop: Data-centric Machine Learning Research (DMLR)

A Skew-Sensitive Evaluation Framework for Imbalanced Data Classification

Min Du · Nesime Tatbul · Brian Rivers · Akhilesh Kumar Gupta · Lucas Hu · Wei Wang · Ryan Marcus · Shengtian Zhou · Insup Lee · Justin Gottschlich


Abstract:

Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task. Metrics such as Balanced Accuracy are commonly used to evaluate a classifier's prediction performance under such scenarios. However, these metrics fall short when classes vary in importance. In this paper, we propose a simple and general-purpose evaluation framework for imbalanced data classification that is sensitive to arbitrary skews in class cardinalities and importances. Experiments with several state-of-the-art classifiers tested on real-world datasets from three different domains show the effectiveness of our framework, not only in evaluating and ranking classifiers but also in training them.
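To make the core idea concrete, the sketch below contrasts plain Balanced Accuracy with an importance-weighted variant: per-class recall is averaged using user-supplied class importance weights, so a rare but critical class can dominate the score. This is an illustrative sketch of the general idea only, not the exact metric proposed in the paper; the function name `importance_weighted_accuracy` and the toy data are assumptions for demonstration.

```python
import numpy as np
from sklearn.metrics import recall_score

def importance_weighted_accuracy(y_true, y_pred, importance):
    """Illustrative metric: per-class recall averaged with user-supplied
    importance weights. Plain Balanced Accuracy is the special case where
    all importances are equal. A sketch of the general idea, not the
    paper's exact formulation."""
    classes = np.unique(y_true)
    # Recall for each class present in y_true (order matches `classes`).
    recalls = recall_score(y_true, y_pred, labels=classes, average=None)
    w = np.array([importance[c] for c in classes], dtype=float)
    return float(np.sum(w * recalls) / np.sum(w))

# Toy example: class 1 is rare but twice as important as class 0.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.array([0] * 90 + [1] * 5 + [0] * 5)  # half the rare class missed
print(importance_weighted_accuracy(y_true, y_pred, {0: 1.0, 1: 2.0}))
```

With equal importances the score reduces to Balanced Accuracy (0.75 here); doubling the rare class's importance pulls it down to about 0.67, reflecting the missed minority predictions more heavily.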
