

Poster
in
Workshop: Next Generation of AI Safety

Gone With the Bits: Benchmarking Bias in Facial Phenotype Degradation Under Low-Rate Neural Compression

Tian Qiu · Arjun Nichani · Rasta Tadayon · Haewon Jeong

Keywords: [ Bias ] [ Fairness ] [ Neural Compression ] [ Phenotype Classification ]


Abstract:

In this study, we investigate how facial phenotypes are distorted under neural image compression and how this distortion varies across racial groups. Neural compression methods are gaining popularity due to their impressive rate-distortion performance and their ability to compress images to extremely low bitrates, below 0.1 bits per pixel (bpp). As deep learning architectures, however, these models are prone to bias during training, which can lead to unfair outcomes for individuals in different groups. We first demonstrate, by benchmarking five popular neural compression algorithms, that compressing facial images to low-bitrate regimes degrades specific phenotypes (e.g., eye type). Next, we highlight the bias in this phenotype degradation across racial groups. We then show that leveraging a racially balanced dataset does not mitigate this bias. Finally, we highlight the bias-realism tradeoff in neural compression algorithms and demonstrate that models achieving high realism typically suffer from high levels of bias.
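The abstract rests on two measurable quantities: the bitrate of a compressed image in bits per pixel, and the disparity in phenotype-degradation rates across groups. The sketch below is a hypothetical illustration (not the authors' code); the function names and the range-based disparity metric are assumptions for exposition.

```python
# Hypothetical sketch of the two quantities discussed in the abstract:
# (1) bits per pixel (bpp) of a compressed image, and
# (2) a simple cross-group disparity measure over degradation rates.

def bits_per_pixel(compressed_bytes: int, height: int, width: int) -> float:
    """Bitrate of a compressed image in bits per pixel."""
    return compressed_bytes * 8 / (height * width)

def degradation_disparity(group_rates: dict) -> float:
    """Range (max minus min) of phenotype-degradation rates across groups.

    A value of 0 means every group's phenotypes degrade at the same rate;
    larger values indicate more biased degradation. (Illustrative metric,
    not necessarily the one used in the paper.)
    """
    rates = group_rates.values()
    return max(rates) - min(rates)

# Example: a 3 KB code for a 512x512 face image falls below 0.1 bpp,
# the "extremely low bitrate" regime the abstract refers to.
print(bits_per_pixel(3072, 512, 512))                    # 0.09375 bpp

# Example: hypothetical per-group rates of eye-type misclassification
# after compression.
print(degradation_disparity({"group_a": 0.25, "group_b": 0.125}))  # 0.125
```

In practice the degradation rate per group would come from running a phenotype classifier on original and reconstructed images and counting label flips; the disparity is then computed over those per-group rates.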
