AI for Law Workshop
Abstract
Recent advances in machine learning have substantially improved general-purpose reasoning, multimodal understanding, and test-time scaling. Yet law remains a uniquely demanding, high-stakes domain that exposes the limits of generic AI progress. Many legal tasks require structured, long-form deductive and inductive reasoning grounded in doctrine, sensitivity to jurisdictional and linguistic variation, and robustness in settings where errors carry serious real-world consequences. They also raise fundamental questions about evaluation, fairness, and access to justice. Legal reasoning thus complements established AI reasoning domains, such as mathematics and coding, by emphasizing context-sensitive, norm-governed inference embedded in real-world institutions.

This workshop centers on a core question: What does it mean for an AI system to be competent in law, and how can such competence be built, evaluated, and validated across jurisdictions and languages while enabling equitable access to justice? We structure the discussion around three interconnected themes:

- AI for Legal Reasoning, focusing on domain-specific supervision, doctrinal grounding, and task design for robust legal inference;
- AI Evaluation for Law, addressing reliable, risk-aware, and jurisdiction-sensitive evaluation paradigms; and
- AI for Access to Justice, examining the technical and institutional conditions under which AI systems improve, or risk undermining, equitable legal access.

To operationalize these themes, we will host a multilingual shared task on long-form legal reasoning across jurisdictions and languages, emphasizing doctrinal and jurisdictional grounding, reasoning quality, and cross-lingual robustness. By bridging machine learning and legal scholarship, the workshop aims to articulate a research agenda for AI systems that are not only more capable, but also more legally grounded and socially responsible.