

ICML 2023 Paper Guidelines

The ICML Paper Guidelines are based on the NeurIPS 2022 paper checklist, introduced in 2021 (see blog post). Our goal is to promote best practices for responsible machine learning research, including reproducibility, ethics, transparency, and impact on society. You should check your paper and supplemental materials against each part of our guidelines.

Reviewers will also be asked to consider these guidelines as a factor in their evaluations.

First, focusing on best research practices…

  • Have you read the publication ethics guidelines and ensured that your paper conforms to them?
  • Did you discuss any potential negative societal impacts (for example, disinformation, privacy, fairness) of your work?
    • For example, you may see a direct path to negative applications; some stakeholders may be harmed even as others benefit; or harms may arise through either intended use or misuse.
    • You may also find this unofficial guidance and other resources at the broader impacts workshop at NeurIPS 2020 helpful.
  • If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
    • If your work uses existing assets, did you cite the creators and the version?
    • Did you mention the license of the assets?
      • If you scraped data from a particular source (e.g., Twitter), you should state the copyright and terms of service of that source.
      • If you are releasing assets, you should include a license, copyright information, and terms of use in the package.
      • If you are using an existing dataset, check paperswithcode.com/datasets, which has curated licenses for some datasets. Use their licensing guide to determine the license of a dataset.
      • If you are repackaging an existing dataset, you should state the original license as well as the one for the derived asset (if it has changed).
    • Did you include any new assets either in the supplemental material or as a URL?
      • For initial review, anonymize your assets. You can either create an anonymized URL or include an anonymized zip file.
      • If you cannot release (e.g., the asset contains proprietary info), state why.
    • Where possible, did you discuss whether and how consent was obtained from people whose data you are using/curating?
    • Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
      • This question is to encourage discussion of potentially undesirable properties and the goals of the research.
      • Offensive material (as data) in your manuscript may target specific vulnerable populations, and a reviewer may be a member of that population. Please refer to the Code of Conduct when selecting materials for inclusion in the manuscript to avoid this type of targeting.
  • If you used crowdsourcing or conducted research with human subjects...
    • Did you include the instructions given to participants including the consent form?
    • Did you describe any potential participant risks?
      • Examples of risks include a crowdsourcing experiment that might show offensive content or collect personally identifiable information (PII).
      • If you obtained IRB approval, you should clearly state this in the paper. For initial submissions, do not include information that would break anonymity.
    • Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?

Next, we review best practices for paper writing and structure…

  • Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
  • Did you describe the limitations of your work?
  • If you are including theoretical results...
    • Did you state the full set of assumptions of all theoretical results?
    • Did you include complete proofs of all theoretical results?
  • If you ran experiments...
    • Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL, anonymized at review time)?
      • Please include a README explaining how to reproduce the results in the paper using the provided code.
      • While we encourage release of code and data, we understand that this might not be possible, so "no because the code is proprietary" is acceptable. 
    • Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
    • Did you report error bars (e.g., variation with respect to the random seed after running experiments multiple times)?
    • Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
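The error-bar item above is often satisfied by reporting a mean and standard deviation across runs with different random seeds. A minimal sketch of that calculation (the accuracy values here are purely illustrative, not from any real experiment):

```python
import statistics

# Hypothetical accuracy results from running the same experiment
# with five different random seeds (illustrative values only).
accuracies = [0.912, 0.905, 0.921, 0.908, 0.915]

mean = statistics.mean(accuracies)
std = statistics.stdev(accuracies)  # sample standard deviation

# Report as "mean ± std over N seeds", e.g. in a results table.
print(f"accuracy: {mean:.3f} ± {std:.3f} over {len(accuracies)} seeds")
```

Whatever statistic you choose (standard deviation, standard error, or confidence intervals), state it explicitly in the paper along with the number of runs.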