

Invited Talk 5 (2019)
Workshop: The How2 Challenge: New Tasks for Vision & Language

Overcoming Bias in Captioning Models

Lisa Anne Hendricks


Abstract:

Most machine learning models are known to capture and exploit bias. This can be beneficial for many classification tasks (e.g., it may be easier to recognize a computer mouse given the context of a computer and a desk), but over-reliance on bias can also lead to incorrect predictions. In this talk, I will first consider how over-reliance on bias leads to incorrect predictions in a scenario where it is inappropriate to rely on bias: gender prediction in image captioning. I will present the Equalizer model, which describes people and their gender more accurately by considering appropriate gender evidence. Next, I will consider how bias relates to hallucination, an interesting error mode in image captioning. I will present a metric designed to measure hallucination and consider questions such as: What causes hallucination? Which models are prone to it? And do current metrics accurately capture it?
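
The abstract does not define the hallucination metric. As a minimal sketch, the Python snippet below computes one plausible instance-level hallucination rate, in the spirit of the CHAIR metric from "Object Hallucination in Image Captioning" (Rohrbach et al., EMNLP 2018): the fraction of object mentions in generated captions that have no support in the image's ground-truth annotations. The function name, the toy data, and the exact-match word lookup are illustrative assumptions, not the talk's actual metric.

# Sketch of an instance-level hallucination rate: the share of object
# mentions in generated captions that do not appear in the image's
# ground-truth object annotations. The exact-match word lookup is a
# deliberate simplification; a real metric would also handle synonyms
# and multi-word object names.

def hallucination_rate(captions, gt_objects, object_vocab):
    """captions:     list of generated caption strings, one per image
    gt_objects:   list of sets of ground-truth object names per image
    object_vocab: set of object names to search for in captions
    Returns the fraction of mentioned objects absent from ground truth."""
    mentioned = 0
    hallucinated = 0
    for caption, gt in zip(captions, gt_objects):
        words = set(caption.lower().split())
        for obj in object_vocab & words:   # objects this caption mentions
            mentioned += 1
            if obj not in gt:              # mentioned, but not in the image
                hallucinated += 1
    return hallucinated / mentioned if mentioned else 0.0

# Toy example: "laptop" is mentioned but not annotated, so 1 of the
# 3 object mentions is hallucinated and the rate is about 0.33.
caps = ["a man sitting at a desk with a laptop"]
gts = [{"man", "desk"}]
vocab = {"man", "desk", "laptop", "dog"}
print(hallucination_rate(caps, gts, vocab))  # 0.333...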
