ICML 2026 Policy for LLM use in reviewing
Summary
ICML 2026 will follow a two-policy framework for LLM use in reviewing:
- Policy A (Conservative):
Use of LLMs for reviewing is strictly prohibited.
- Policy B (Permissive):
Allowed: Use of LLMs to help understand the paper and related work, and to polish reviews. Submissions can be fed to privacy-compliant* LLMs.
Not allowed: Asking LLMs to assess strengths/weaknesses, suggest key points for the review, suggest an outline for the review, or write the full review.
*By “privacy-compliant”, we refer to LLM tools that do not use logged data for training and that place limits on data retention. This includes enterprise/institutional subscriptions to LLM APIs, consumer subscriptions with an explicit opt-out from training, and self-hosted LLMs. (We understand that this is an oversimplification.)
Under both policies, reviewers are always responsible for the full content of their reviews.
Reviewers declare which policy they want to follow, and authors declare whether they require their papers to be reviewed under Policy A or allow them to be reviewed under Policy B. Any reviewer who is an author on a paper that requires Policy A must also be willing to follow Policy A. Submissions are matched with compatible reviewers, and each reviewer is told which policy to follow on all of their assigned papers. (Full details of the two policies and the matching process are provided below.)
Guiding principles and the design process
ICML acknowledges a wide range of perspectives in our community on the use of LLMs in reviewing. This policy is designed with the following objectives in mind:
- Retain human assessment as a crucial part of the reviewing process.
- Meet reviewers where they are in their use of AI systems to do their work.
- Respect authors who do not want to have their papers fed to LLMs.
- Uphold the integrity and transparency of the peer-review process.
The two-policy framework is an effort to navigate these competing objectives. The design of the two policies was informed by two community surveys that the ICML 2026 program chairs and integrity chair conducted in November 2025. For details about the surveys and how they influenced our policies, see Introducing ICML 2026 policy for LLMs in reviews.
Two policies
- Policy A (Conservative):
Use of LLMs in any stage of reviewing is strictly prohibited (except for inadvertent use in tools that are not traditionally LLM-based, like web search/retrieval and spelling/grammar checkers).
- Policy B (Permissive):
Allowed: Use of LLMs to help understand the paper and related work, and to polish reviews. Submissions can be fed to privacy-compliant* LLMs.
Not allowed: Delegating judgment and critique of the paper to LLMs. This includes asking LLMs to assess the paper's quality/significance, identify the paper's strengths/weaknesses, suggest key points for the review, suggest an outline for the review, write the full review, or suggest questions for authors.
*By “privacy-compliant”, we refer to LLM tools that do not use logged data for training and that place limits on data retention. This includes enterprise/institutional subscriptions to LLM APIs, consumer subscriptions with an explicit opt-out from training, and self-hosted LLMs. (We understand that this is an oversimplification.)
Under both policies, reviewers are expected to read the entire body of the paper, but not necessarily appendices and supplementary material. Reviewers are fully responsible for the entire content of their reviews, regardless of the AI tools that they use. In particular, any hallucinated content in reviews is subject to disciplinary action according to our academic integrity policy. Low-quality reviews may result in penalties according to our reciprocal reviewing policy.
Matching process
In our two-policy framework, reviewers state their preferred policy, and for each submitted paper, the authors state whether they require Policy A or allow Policy B. Each reviewer is explicitly told a single policy that they must follow on all of their assigned papers: this is the reviewer's Actual Policy.
- Papers that require A will be reviewed by reviewers with Actual Policy A.
- Papers that allow B will be reviewed by a mix of reviewers, some assigned Actual Policy A and some assigned Actual Policy B.
- Reviewers who prefer Policy A will be assigned Actual Policy A.
- Reviewers who prefer Policy B may be assigned either Actual Policy A or Actual Policy B. Our matching algorithm will try to minimize overrides from B to A.
Reciprocity requirement: Any reviewer who is also an author of a paper that requires Policy A must also be willing to follow Policy A.
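To make these compatibility constraints concrete, here is a minimal illustrative sketch in Python. It is not the actual ICML matching implementation; the class and function names are hypothetical, and the real assignment also optimizes reviewer expertise and load, which is omitted here.

```python
# Illustrative sketch only: a toy check of the compatibility rules described above.
from dataclasses import dataclass

@dataclass
class Reviewer:
    preferred_policy: str   # "A" or "B" (the policy the reviewer declared)
    actual_policy: str      # "A" or "B" (assigned by the program chairs)

@dataclass
class Paper:
    policy: str             # "A" = requires Policy A, "B" = allows Policy B

def reviewer_assignment_is_valid(r: Reviewer) -> bool:
    # Reviewers who prefer A are always assigned Actual Policy A; reviewers who
    # prefer B may be overridden to A, but never the other way around.
    return r.actual_policy == "A" or r.preferred_policy == "B"

def is_compatible(paper: Paper, r: Reviewer) -> bool:
    # Papers that require Policy A may only be reviewed under Actual Policy A;
    # papers that allow Policy B accept reviewers with either Actual Policy.
    if paper.policy == "A":
        return r.actual_policy == "A"
    return True

# Example: a paper that requires Policy A cannot be matched with a reviewer
# whose Actual Policy is B.
assert not is_compatible(Paper(policy="A"), Reviewer("B", "B"))
assert is_compatible(Paper(policy="B"), Reviewer("A", "A"))
```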
Visibility: The policy chosen for each paper is visible only to the authors of that paper, not to reviewers or meta-reviewers, and will not be published in the proceedings. The policy preference and Actual Policy of each reviewer are visible only to that reviewer, not to authors, other reviewers, or meta-reviewers.
Randomization, monitoring, experiment: Program chairs may randomize some of the Actual Policy assignments within the constraints above. Randomization serves two goals. First, it is used to monitor differences between the distributions of scores given by reviewers with Actual Policy A and Actual Policy B; if program chairs identify systematic differences, they will act to compensate. Second, randomization is used as part of an experiment to evaluate differences in outcomes between the two policies. The results will inform future policies and might be published in a peer-reviewed publication.
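The policy does not specify how score distributions will be compared. As a sketch of what such monitoring could look like (purely an assumption, not the chairs' actual procedure), one could run a two-sample test on the scores given under each Actual Policy:

```python
# Hypothetical monitoring sketch: compare review-score distributions between
# reviewers assigned Actual Policy A and Actual Policy B. The choice of a
# Mann-Whitney U test is an assumption made for illustration only.
from scipy.stats import mannwhitneyu

def compare_score_distributions(scores_policy_a, scores_policy_b, alpha=0.05):
    """Return test statistic, p-value, and a flag for a systematic difference."""
    stat, p_value = mannwhitneyu(scores_policy_a, scores_policy_b,
                                 alternative="two-sided")
    return {"statistic": stat, "p_value": p_value,
            "systematic_difference": p_value < alpha}
```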
Enforcement: Reviewers are required to strictly follow their assigned Actual Policy when reviewing and discussing papers. Any attempt by a reviewer to deviate from their assigned policy constitutes a violation of our academic integrity policy, and may result in actions by the program chairs (including the desk rejection of their own submissions).
Appendix: Examples for Policy B
Here are some examples of what is allowed and what is not allowed under Policy B:
Allowed:
- Feed the submission into a privacy-compliant LLM.
- Ask clarifying questions about the submission.
- Ask about background concepts or concepts introduced earlier in the paper.
- Ask the LLM to help retrieve related work and identify connections/overlaps.
- Use the LLM to help polish the final review.
Not allowed:
- Feed the submission into an LLM that is not privacy-compliant.
- Ask the LLM to summarize the paper.
- Ask the LLM to evaluate strengths and weaknesses.
- Ask the LLM to write a review.
- Ask the LLM to suggest questions for authors.
Example workflow under Policy B:
- Feed the submission into a privacy-compliant LLM:
  - Read through the paper yourself, asking the LLM clarifying questions about mathematical concepts and connections to related work.
  - Use the LLM to explore related work.
- Outside the LLM environment:
  - Write down the paper summary (possibly as bullets).
  - Formulate strengths and weaknesses.
  - Formulate your overall judgment and the three major reasons supporting it.
  - Formulate clarification questions for authors.
- Use the LLM to polish your final review and questions (keep things concise and watch for embellishments and hallucinations).