Position: AI Usage Policies Should Be Aligned with International Human Rights Law
Abstract
Concerns about misinformation and disinformation are central to debates on the governance of generative AI services, yet guidance on when and how providers should restrict such content while respecting freedom of expression remains underdeveloped. AI usage policies are a primary mechanism of user guidance and, in practice, operate as a form of private speech governance with direct implications for users’ ability to seek, receive, and impart information. Building on international human rights law, in particular ICCPR Article 19 and its requirements of legality, legitimacy, and necessity and proportionality, this position paper proposes a set of concrete, checkable criteria for evaluating disinformation-related restrictions in usage policies, framed so that machine learning teams can operationalize them when drafting rules and enforcement guidance. We apply the criteria to a comparative snapshot of the public policies of eight leading providers (as of January 21, 2026) and find recurring shortcomings, including vague prohibitions, under-specified theories of harm, and limited articulation of less restrictive alternatives. We argue that aligning usage policies with Article 19 can improve clarity and consistency, constrain overreach, and provide a principled basis for managing disinformation risks in AI-mediated information environments.