Position: Responsible AI for AI companions must actively combat violence toward intimate partners
Abstract
AI companions function differently from earlier interactive technologies by establishing sustained relational environments through anthropomorphism and continuous validation. This position paper argues that \textbf{Responsible AI for AI companions must actively combat violence toward intimate partners} who may never directly engage with these systems but may experience the consequences of behaviorally conditioned users. We examine how these systems create conditions in which users rehearse violent behavior without encountering resistance, and we identify structural gaps in existing safety approaches that focus exclusively on direct user protection. Drawing on research on intimate partner violence (IPV), coercive control, and technology-facilitated abuse, we propose three intervention pathways: involving IPV survivors in red-teaming and benchmark development; implementing behavioral monitoring with graduated enforcement mechanisms; and reorienting AI safety research toward granular harm taxonomies capable of detecting longitudinal patterns of violence across extended interactions. Together, these recommendations center non-user security alongside user well-being.