

Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Label-Imbalanced and Group-Sensitive Classification under Overparameterization

Ganesh Ramachandra Kini · Orestis Paraskevas · Samet Oymak · Christos Thrampoulidis


Abstract:

The goal in label-imbalanced and group-sensitive classification is to optimize metrics such as balanced error and equal opportunity. Classical methods, like re-weighted cross-entropy, are known to fail when used with the modern practice of training deep nets to the terminal phase of training (TPT), that is, training beyond zero training error. In contrast to previous heuristics, we follow a principled analysis that explains how different loss adjustments affect margins. First, we prove that for linear classifiers trained in the TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the relative margins between classes change appropriately. To show this, we establish a connection between the multiplicative CE modification and cost-sensitive support-vector machines. While additive adjustments are ineffective deep in the TPT, we show numerically that they can speed up convergence by countering an initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. For Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between balanced/standard error and equal opportunity.
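To make the distinction between multiplicative and additive logit adjustments concrete, here is a minimal sketch of one plausible parameterization of a vector-scaling-style cross-entropy in NumPy. The per-class vectors `Delta` (multiplicative) and `iota` (additive), and the function name `vs_loss`, are assumptions introduced for this illustration based on the abstract's description, not the paper's exact formulation.

```python
import numpy as np

def vs_loss(logits, y, Delta, iota):
    """Cross-entropy with per-class logit adjustments (illustrative sketch).

    logits : (n, C) array of class scores
    y      : (n,) array of integer labels
    Delta  : (C,) multiplicative per-class adjustments (hypothetical name)
    iota   : (C,) additive per-class adjustments (hypothetical name)
    """
    z = logits * Delta + iota                       # adjusted logits
    z = z - z.max(axis=1, keepdims=True)            # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()  # mean negative log-likelihood

# Example: 3-class problem where class 2 (assumed minority) receives
# larger multiplicative and additive adjustments.
logits = np.array([[2.0, 0.5, -1.0]])
labels = np.array([2])
print(vs_loss(logits, labels,
              Delta=np.array([1.0, 1.0, 1.5]),
              iota=np.array([0.0, 0.0, 0.5])))
```

Setting `Delta` to all ones and `iota` to zero recovers plain cross-entropy, so in this sketch existing additive-only or multiplicative-only adjustments appear as special cases, mirroring how the abstract positions the VS loss.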