Position: Prompting Intent Should Be Audited in LLM-Assisted Peer Review
Abstract
This position paper argues that prompting intent should be audited in LLM-assisted peer review, moving beyond mere detection or disclosure of LLM usage. As major conferences increasingly allow LLM assistance and deploy mechanisms for detecting LLM-generated text, a critical gap remains: usage alone does not determine risk. A more consequential variable is prompting intent, the objective or stance encoded in how an LLM is instructed, which can systematically shape the framing and tone of a review. We advocate an intent-centric auditing perspective that treats prompting intent as latent and infers relevant signals from the review text. Because intent is unobservable in real deployments, we train an intent detector on synthetically labeled LLM-generated reviews. Applying the detector to ICLR 2026 reviews previously flagged for substantial LLM usage, we find coherent linguistic and structural signatures associated with directional prompting, along with systematic associations with review ratings, reviewer confidence, and paper acceptance decisions. We conclude with practical considerations for auditing LLM-assisted peer review, emphasizing procedural transparency and human-in-the-loop oversight.