

Poster
in
Workshop: Neural Conversational AI Workshop - What’s left to TEACH (Trustworthy, Enhanced, Adaptable, Capable and Human-centric) chatbots?

Scalable Conversational Moderation: Promoting Constructive Dialogue to Reduce Online Polarization

Hyundong Cho · Jonathan May


Abstract:

As the number of online users grows and societal polarization deepens, there is an increasing need for moderation strategies that can scale alongside these trends. Traditional approaches to automated moderation, such as banning users or deleting comments, often exacerbate polarization by driving users toward echo chambers. In this paper, we propose a novel approach to automatic moderation, called conversational moderation, that leverages conversational AI as moderators to create a more accommodating and constructive online environment. We present the first study that uses large language models as conversational moderators and evaluates their performance in guiding simulated continuations of controversial Reddit conversations toward more constructive outcomes. We take an iterative approach to prompt engineering, using self-talk to adapt large language models into various types of moderator bots. Our preliminary experiments reveal that prompts integrating conflict resolution and effective communication techniques can yield improvements in coherency and understandingness, but the high subjectivity of this task renders these results statistically insignificant. Our findings thus far demonstrate that even state-of-the-art language models often repeat boilerplate guidelines and thus fail to conduct conversational moderation effectively.
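The self-talk setup the abstract describes can be sketched roughly as follows. This is an illustrative assumption of the pipeline, not the authors' code: all prompt text, the `generate` stub, and the alternation of user and moderator turns are hypothetical placeholders for a real LLM completion call and the authors' actual prompts.

```python
# Hypothetical sketch: an LLM role-plays two disagreeing users (self-talk)
# while a separately prompted moderator bot interjects after each exchange.

MODERATOR_PROMPT = (
    "You are a conversational moderator. Acknowledge both viewpoints, "
    "de-escalate, and steer the discussion toward common ground."
)

def generate(system_prompt, transcript):
    """Hypothetical stand-in for an LLM completion API call."""
    # Stub response so the sketch runs without model access.
    return f"[{system_prompt.split('.')[0]}] reply at turn {len(transcript)}"

def self_talk(user_a_prompt, user_b_prompt, n_rounds=2):
    """Simulate a controversial exchange, moderating after each round."""
    transcript = []
    for _ in range(n_rounds):
        transcript.append(("user_a", generate(user_a_prompt, transcript)))
        transcript.append(("user_b", generate(user_b_prompt, transcript)))
        transcript.append(("moderator", generate(MODERATOR_PROMPT, transcript)))
    return transcript

dialogue = self_talk(
    "You strongly support policy X.",
    "You strongly oppose policy X.",
)
for role, turn in dialogue:
    print(role, "->", turn)
```

In the iterative prompt-engineering loop the abstract mentions, the `MODERATOR_PROMPT` would be the artifact under revision: the simulated dialogue is inspected, the prompt is refined (e.g. by adding conflict-resolution techniques), and the self-talk is rerun.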
