

Poster in Workshop: Next Generation of AI Safety

Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations

Zilin Ma · Susannah (Cheng) Su · Nathan Zhao · Linn Bieske · Blake Bullwinkel · Jinglun Gao · Gekai Liao · Siyao Li · Ziqing Luo · Boxiang Wang · Zihan Wen · Yanrui Yang · Yanyi Zhang · Claude Bruderlein · Weiwei Pan

Keywords: [ Humanitarian Negotiations ] [ AI Ethics ] [ Large Language Models (LLMs) ] [ AI-assisted Decision Making ] [ Artificial Intelligence (AI) ] [ Data Privacy in AI ] [ AI Safety ]


Abstract:

Humanitarian negotiations in conflict zones, known as frontline negotiations, are often highly adversarial, complex, and high-risk. Over the years, several best practices have emerged that help negotiators extract insights from large datasets to navigate nuanced and rapidly evolving scenarios. Recent advances in large language models (LLMs) have sparked interest in the potential for AI to aid decision-making in frontline negotiation. Through in-depth interviews with 13 experienced frontline negotiators, we identified their needs for AI-assisted case analysis and creativity support, as well as concerns surrounding confidentiality and model bias. We further explored the potential for AI augmentation of three standard tools used in frontline negotiation planning. We evaluated the quality and stability of our ChatGPT-based negotiation tools in the context of two real cases. Our findings highlight the potential for LLMs to enhance humanitarian negotiations and underscore the need for careful ethical and practical considerations.
