Research Ethics
This document outlines ICML standards for the ethical conduct of research. It complements our Peer-review Ethics policy, which focuses on the integrity of peer review, and our Code of Conduct, which focuses on professional conduct.
Authors submitting their work to ICML must adhere to the following guidelines, adapted from the NeurIPS Code of Ethics. Specifically, when there are risks directly associated with the proposed methods, methodology, application, or data collection and usage, authors are expected to make a reasonable effort to identify these risks and to provide a thoughtful discussion of them, including a rationale for their decisions. Authors are strongly encouraged to propose mitigation strategies for identified risks whenever feasible. However, it is understood that some risks, particularly those arising from general-purpose methods being applied in unforeseeable or speculative ways, may fall outside the reasonable scope of the research and need not be addressed comprehensively.
The following categories will be used for flagging potential ethical issues during the review process.
- "Discrimination / Bias / Fairness Concerns"
- Example: Papers about applications where bias, fairness, and discrimination are a concern (e.g., hiring algorithms) should acknowledge these risks and, ideally, include analysis that directly addresses the relevant concern, where such analysis is feasible and within reasonable scope.
- "Inappropriate Potential Applications & Impact (e.g., human rights concerns)"
- Example: Papers about applications with a direct connection to human rights issues (e.g., weapons) should provide a thoughtful discussion of the risks of the application. Where appropriate and within the scope of the work, authors may also offer reasonable recommendations for mitigating these risks.
- "Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)"
- Example: While we acknowledge that IRB standards vary across countries and institutions, papers describing research involving human subjects should provide evidence that the work adhered to the authors’ home institution’s procedures for obtaining IRB approval or was eligible for an exemption.
- "Privacy and Security (e.g., personally identifiable information)"
- Example: Papers that rely on data that includes personally identifiable information should make reasonable efforts to ensure that individuals are not identifiable in the research outputs.
- "Legal Compliance (e.g., EU AI Act, GDPR, copyright, terms of use)"
- ICML is an international conference, and legal requirements vary across jurisdictions. Where appropriate, papers should provide evidence that applicable local laws and regulations were followed.
- The fair use of copyrighted material for training generative AI models is a contested issue. Submissions that use datasets containing copyrighted material should acknowledge and address this in the impact statement.
- It is the responsibility of authors to ensure compliance with the EU AI Act, where applicable. The EU AI Act sets out a list of prohibited AI practices deemed to pose an “unacceptable risk”; the ban on such systems comes into force on February 2, 2025. Banned AI systems include:
- AI systems that manipulate individuals' decisions subliminally or deceptively, causing or reasonably likely to cause significant harm.
- AI systems that exploit vulnerabilities like age, disability, or socio-economic status to influence behavior, causing or reasonably likely to cause significant harm.
- AI systems that evaluate or classify individuals based on their social behavior or personality characteristics, causing detrimental or unfavorable treatment.
- AI systems that assess or predict the risk of an individual committing a criminal offense based on their personality traits and characteristics.
- AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- AI systems that infer emotions in workplaces or education centers (except where this is needed for medical or safety reasons).
- AI systems that categorize individuals based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
- AI systems that collect “real-time” biometric information in publicly accessible spaces for law enforcement purposes (except in very limited circumstances).
- "Research Integrity Issues (e.g., plagiarism, fraud, collusion rings, prompt injection, etc.)"
- Issues flagged in this category will bypass the ethics review process and be escalated to the program chairs for adjudication.
For a fuller discussion of these categories, see the NeurIPS Code of Ethics.