Position: Child Safety Necessitates New Approaches to AI Safety
Abstract
Modern artificial intelligence (AI) systems have transformative potential across many domains, but they also present profound new risks to child safety. AI is increasingly being misused to generate child sexual abuse material, facilitate child sexual exploitation, and lower the barriers to perpetrating such harm. In this position paper, we argue that protecting children from AI-facilitated abuse requires new approaches to AI safety. Existing safety techniques assume data accessibility, transparency, and evaluation practices that are incompatible with the ethical and legal constraints surrounding child sexual abuse material. We examine how these constraints create new technical challenges, such as limitations on dataset auditing, red teaming, and fine-tuning prevention. We then outline 15 open problems in child safety across the AI development lifecycle---from dataset curation and model design to deployment and long-term maintenance. We propose targeted recommendations for researchers, developers, and policymakers to bridge the gap between theoretical AI safety and the realities of child protection. Our work aims to reframe child safety as a central, safety-critical dimension of AI research, motivating new work that translates responsible AI principles into concrete safeguards against the exploitation of children.