Clarification on Large Language Model Policy
We (Program Chairs) have included the following statement in the Call for Papers for ICML 2023:
Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.
This statement has raised a number of questions from potential authors and led some to proactively reach out to us. We appreciate your feedback and comments, and would like to further clarify the intention behind this statement and how we plan to implement this policy for ICML 2023.
TL;DR
- The Large Language Model (LLM) policy for ICML 2023 prohibits text produced entirely by LLMs (i.e., “generated”). This does not prohibit authors from using LLMs for editing or polishing author-written text.
- The LLM policy is largely predicated on the principle of being conservative, in order to guard against potential issues with using LLMs, including plagiarism.
- The LLM policy applies to ICML 2023. We expect this policy may evolve in future conferences as we better understand LLMs and their impact on scientific publishing.
Intention
During the past few years, we have observed and been part of rapid progress in large-scale language models (LLMs), both in research and deployment. This progress has not slowed down but only sped up during the past few months. As many, including ourselves, have noticed, LLMs released in the past few months, such as OpenAI’s ChatGPT, are now able to produce text snippets that are often difficult to distinguish from human-written text. Undoubtedly, this is exciting progress in natural language processing and generation.
Such rapid progress often comes with unanticipated consequences as well as unanswered questions. As we have already seen during the past few weeks alone, there is, for instance, the question of whether text and images generated by large-scale generative models are considered novel or are mere derivatives of existing work. There is also the question of who owns the text snippets, images, or any other media sampled from these generative models: the user of the generative model, the developer who trained the model, or the content creators who produced the training examples? It is certain that these questions, and many more, will be answered over time, as these large-scale generative models are more widely adopted. However, we do not yet have clear answers to any of them.
Since how we answer these questions directly affects our reviewing process, which in turn affects members of our research community and their careers, we must be careful and somewhat conservative in considering this new technology. OpenAI released the beta version of ChatGPT at the end of November 2022, which is less than two months ago. Unfortunately, we have not had enough time to observe, investigate and consider its implications for our reviewing and publication process. We thus decided to prohibit producing/generating ICML paper text using large-scale language models this year (2023).
Although we are prohibiting text generated by LLMs this year, we plan to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI. This decision will be revisited for future iterations of ICML.
Implementation and Enforcement
Regardless of which technologies are available, we understand that many authors use external assistive tools when writing. Such assistive technologies include semi-automated editing tools, such as Grammarly, as well as various forms of online dictionaries and machine translation systems. Along these lines, we do not prohibit authors from using LLMs for light editing of their own text. In other words, as long as LLMs are used in the same way as grammar checkers, word autocorrect, and other editing tools, this new policy does not apply.
As some have pointed out, and as we are also well aware ourselves, it is difficult to detect whether any given text snippet was produced by a language model. The ICML leadership team does not plan to run any automated or semi-automated system on submissions to check for violations of the LLM policy this year (2023). Instead, we plan to investigate a potential violation of the LLM policy only when a submission is brought to our attention with a significant concern about such a violation. Any submission flagged for a potential violation of the LLM policy will go through the same process as any other submission flagged for plagiarism.
As we learn more about the consequences and impact of LLMs on academic publishing, and as we redesign the LLM policy for future conferences (after ICML 2023), we will consider different options and technologies for implementing and enforcing the latest version of the policy.