

Poster

Discovering Bias in Latent Space: An Unsupervised Debiasing Approach

Dyah Adila · Shuai Zhang · Boran Han · Yuyang Wang

Hall C 4-9 #702
Poster Session: Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

The question-answering (QA) capabilities of foundation models are highly sensitive to prompt variations, rendering their performance susceptible to superficial, non-meaning-altering changes. This vulnerability often stems from the model's preference or bias towards specific input characteristics, such as option position or superficial image features in multi-modal settings. We propose to rectify this bias directly in the model's internal representation. Our approach, SteerFair, finds the bias direction in the model's representation space and steers activation values away from it during inference. Specifically, we exploit the observation that bias often adheres to simple association rules, such as the spurious association between the first option and correctness likelihood. Next, we construct demonstrations of these rules from unlabeled samples and use them to identify the bias directions. We empirically show that SteerFair significantly reduces instruction-tuned model performance variance across prompt modifications on three benchmark tasks. Remarkably, our approach surpasses a supervised baseline with 100 labels by an average of 10.86 accuracy points and 12.95 score points, and matches the performance of the baseline trained with 500 labels.
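The abstract describes two concrete steps: estimating a bias direction from unlabeled demonstrations of an association rule (e.g., the correct answer always placed first vs. always placed last), and steering activations away from that direction at inference. The sketch below illustrates one plausible reading of this in PyTorch; the function names, the mean-difference estimator, and the toy linear layer are all illustrative assumptions, not the authors' released implementation (the paper may, for instance, extract multiple directions per layer).

# Minimal sketch, assuming bias can be summarized by one unit-norm direction
# per layer; names here (bias_direction, steer_away_hook) are hypothetical.
import torch
import torch.nn as nn

@torch.no_grad()
def bias_direction(acts_rule_a: torch.Tensor, acts_rule_b: torch.Tensor) -> torch.Tensor:
    """Unit-norm difference of mean activations between two demonstration
    sets that instantiate opposite association rules (e.g., correct option
    always first vs. always last). Inputs: (n_samples, hidden_dim)."""
    d = acts_rule_a.mean(dim=0) - acts_rule_b.mean(dim=0)
    return d / d.norm()

def steer_away_hook(direction: torch.Tensor, strength: float = 1.0):
    """Forward hook that removes the component of a layer's output lying
    along `direction`, scaled by `strength`, at inference time."""
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        proj = (h @ direction).unsqueeze(-1) * direction  # projection onto bias direction
        h = h - strength * proj
        return ((h,) + output[1:]) if isinstance(output, tuple) else h
    return hook

# Toy usage with a stand-in layer and synthetic demonstration activations.
torch.manual_seed(0)
dim = 16
layer = nn.Linear(dim, dim)
acts_a = torch.randn(32, dim) + 2.0  # activations under rule A (stand-in)
acts_b = torch.randn(32, dim) - 2.0  # activations under rule B (stand-in)
d = bias_direction(acts_a, acts_b)

handle = layer.register_forward_hook(steer_away_hook(d))
out = layer(torch.randn(4, dim))
print((out @ d).abs().max())  # component along d is ~0 after steering
handle.remove()

With strength=1.0 the hook projects the bias component out entirely; smaller values would interpolate between the original and steered activations. On a real instruction-tuned model the same hook would be registered on a transformer block (whose output is a tuple, which the hook handles), rather than on a bare linear layer.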
