

Poster in Workshop: Trustworthy Multi-modal Foundation Models and AI Agents (TiFA)

Decomposed evaluations of geographic disparities in text-to-image models

Abhishek Sureddy · Dishant Padalia · Nandhinee Periyakaruppan · Oindrila Saha · Adina Williams · Adriana Romero Soriano · Megan Richards · Polina Kirichenko · Melissa Hall


Abstract:

Recent work has identified substantial disparities in generated images of different geographic regions, including stereotypical depictions of everyday objects like houses and cars. However, existing measures of these disparities are limited to either human evaluations, which are time-consuming and costly, or automatic metrics over full images, which cannot attribute disparities to specific parts of the generated images. In this work, we introduce a new set of metrics, Decomposed Indicators of Disparities in Image Generation (Decomposed-DIG), that allows us to separately measure geographic disparities in the depiction of objects and backgrounds in generated images. Using Decomposed-DIG, we audit a widely used latent diffusion model and find that generated images depict objects with better realism than backgrounds, and that backgrounds tend to exhibit larger regional disparities than objects. We use Decomposed-DIG to pinpoint specific examples of disparities, such as stereotypical backgrounds in images generated for Africa, difficulty generating modern vehicles for Africa, and unrealistic placement of some objects in outdoor settings. Informed by our metric, we use a new prompting structure that yields a 52% worst-region improvement and a 20% average improvement in generated background diversity.
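To make the decomposed evaluation idea concrete, below is a minimal sketch (not the authors' implementation) of how object and background diversity could be scored separately per geographic region. The `embed` and `segment_object` functions are hypothetical placeholders standing in for a real feature extractor and segmentation model, which the abstract does not specify.

```python
import numpy as np

def embed(pixels: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: flatten and L2-normalize (stand-in for a vision backbone)."""
    v = pixels.reshape(-1).astype(np.float64)
    return v / (np.linalg.norm(v) + 1e-8)

def segment_object(image: np.ndarray) -> np.ndarray:
    """Placeholder segmentation: treat the central square as 'object', the rest as 'background'."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = True
    return mask

def diversity(features: list[np.ndarray]) -> float:
    """Mean pairwise distance between feature vectors (higher = more diverse)."""
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(features) for b in features[i + 1:]]
    return float(np.mean(dists)) if dists else 0.0

def decomposed_diversity(images_by_region: dict[str, list[np.ndarray]]) -> dict[str, dict[str, float]]:
    """For each geographic region, score object and background diversity separately."""
    scores = {}
    for region, images in images_by_region.items():
        obj_feats, bg_feats = [], []
        for img in images:
            mask = segment_object(img)
            obj_feats.append(embed(img * mask[..., None]))
            bg_feats.append(embed(img * ~mask[..., None]))
        scores[region] = {"object": diversity(obj_feats),
                          "background": diversity(bg_feats)}
    return scores

if __name__ == "__main__":
    # Toy usage with random "generated images" grouped by region.
    rng = np.random.default_rng(0)
    fake = {r: [rng.random((64, 64, 3)) for _ in range(4)] for r in ["Africa", "Europe"]}
    print(decomposed_diversity(fake))
```

Comparing the per-region object versus background scores in this manner is what lets disparities be attributed to a specific part of the image rather than to the image as a whole.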
