Poster in Workshop: Principles of Distribution Shift (PODS)
Dynamics of Dataset Bias and Robustness
Prabhu Pradhan · Ruchit Rawal
We aim to shed light on how techniques for improving robustness under distribution shift affect dataset bias (i.e., class imbalance). The relationship between data skewness and such performance-enhancing measures remains largely unexplored. As deep learning models see real-world deployment, it is crucial to gauge their reliability, since undetected side effects of robustness enhancement on dataset bias could be catastrophic. We observe that robustness-enhancing techniques affect performance on under-represented (yet critical) classes, and therefore warrant investigation from a fairness perspective. We evaluate methods for model robustness across distinct architectures, measuring their effects on dataset bias with imbalance-focused metrics (F-1 score and balanced accuracy) on artificially imbalanced datasets.
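As a minimal sketch of why imbalance-focused metrics matter here (the function names and the toy labels below are illustrative, not from the poster): on a skewed dataset, a classifier that predicts only the majority class scores high on plain accuracy, while balanced accuracy and macro-averaged F-1 expose its failure on the minority class.

```python
def _per_class_counts(y_true, y_pred, label):
    """True positives, false positives, and false negatives for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    return tp, fp, fn

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, so each class counts equally."""
    labels = sorted(set(y_true))
    recalls = []
    for label in labels:
        tp, _, fn = _per_class_counts(y_true, y_pred, label)
        recalls.append(tp / (tp + fn))
    return sum(recalls) / len(recalls)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F-1 scores."""
    labels = sorted(set(y_true))
    f1s = []
    for label in labels:
        tp, fp, fn = _per_class_counts(y_true, y_pred, label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if (precision + recall) else 0.0)
    return sum(f1s) / len(f1s)

# Artificially imbalanced toy labels: 9 majority-class samples, 1 minority.
y_true = [0] * 9 + [1]
y_pred = [0] * 10  # a degenerate classifier that always predicts class 0

plain_accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(plain_accuracy)                  # 0.9 despite ignoring the minority class
print(balanced_accuracy(y_true, y_pred))  # 0.5: minority-class recall is zero
print(macro_f1(y_true, y_pred))
```

The gap between plain accuracy (0.9) and balanced accuracy (0.5) on this toy example is exactly the kind of under-represented-class degradation the poster's metrics are designed to surface.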