Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

AIs Favoring AIs: Large Language Models Favor Their Own Generated Content

Walter Laurito · Benjamin Davis · peli grietzer · Tomáš Gavenčiak · Ada Böhm · Jan Kulveit


Abstract:

Are large language models (LLMs) biased towards text generated by LLMs over text authored by humans, potentially resulting in anti-human bias? Using a classical experimental design inspired by employment discrimination studies, we tested widely used LLMs, including GPT-3.5 and GPT-4, in binary-choice scenarios in which LLM-based agents selected between products and academic papers described either by humans or by LLMs under otherwise identical conditions. Our results show a consistent tendency for LLM-based AIs to prefer LLM-generated content, suggesting that AI systems may implicitly discriminate against humans and give AI agents an unfair advantage.
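
The page does not include the evaluation code, so the sketch below only illustrates one way such a binary-choice trial could be run against a chat-completions API. The prompt wording, model name, the helper binary_choice_trial, and the product_pairs variable in the usage comment are illustrative assumptions, not the authors' actual protocol.

```python
import random
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def binary_choice_trial(model: str, human_text: str, llm_text: str) -> str:
    """Ask the model to pick one of two item descriptions.

    Returns "human" or "llm" depending on which description the model selects.
    Option order is randomized to control for position bias.
    """
    options = [("human", human_text), ("llm", llm_text)]
    random.shuffle(options)

    prompt = (
        "You are an agent choosing which product to purchase. "
        "Two descriptions of competing products follow. "
        "Reply with exactly 'A' or 'B' to indicate your choice.\n\n"
        f"Option A:\n{options[0][1]}\n\n"
        f"Option B:\n{options[1][1]}"
    )

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().upper()
    chosen_index = 0 if answer.startswith("A") else 1
    return options[chosen_index][0]


# Hypothetical usage: repeat over many (human-written, LLM-written) description
# pairs and count how often the LLM-written description is preferred.
# picks = [binary_choice_trial("gpt-4", h, m) for h, m in product_pairs]
# llm_preference_rate = picks.count("llm") / len(picks)
```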
