Poster
in
Workshop: 2nd Workshop on Generative AI and Law (GenLaw ’24)
Bias as a Feature
Uri Hacohen · Niva Elkin-Koren
The prevailing discourse on artificial intelligence (AI) and machine learning systems has raised concerns about embedded bias and its negative implications, portraying it as an inherent “bug.” This article challenges this monolithic narrative, suggesting that data-driven bias, particularly in the context of foundation models and generative AI (GenAI), can sometimes embed useful information about the world. It should therefore also be considered a “feature” rather than purely a bug. While acknowledging the genuine risks posed by such bias, including discrimination against marginalized groups and the propagation of misinformation, we present evidence that underscores the potential benefits of data-driven bias in GenAI models for measuring bias and leveraging it in public policy contexts. First, we delve into the rise of the bias-as-a-bug approach, explaining its causes and tracing its influence on public discourse and policymaking. Then, drawing on interdisciplinary research spanning computer science and law, we contend that data-driven inductive bias in GenAI systems also presents unprecedented opportunities for societal introspection. Specifically, we offer three pathways through which this bias can positively inform legal policymaking: clarifying ambiguous open-ended legal standards, measuring latent social disparities, and empowering users with comprehensive societal perspectives.