Reviewer Guidelines

(Adapted from the NIPS 2014 reviewer instructions)

## Executive Summary

• The deadline for reviews is March 20. Borderline papers may get an extra review during the discussion phase.
• Please let us know if you identify dual-submissions that violate our policy. The policy is detailed in the author instructions.

## Reviewing Timeline

• February 22: Start of reviewing process.
• March 20: Reviewing deadline. Reviews must be completed and entered into CMT.
• March 20 – March 27: Initial discussion period and possible additional reviews.
• March 27 – March 30: Author feedback period.
• March 30 – April 10: Reviewers respond to author feedback, and final discussion period between Reviewers and Area Chairs.

## Introduction

Thank you for reviewing for ICML! Your help is vital to the community: the technical content of the program is largely determined by the efforts and comments of the reviewers. Below are instructions and guidelines that will help you write reviews efficiently and effectively.

## Conference system access

All reviews must be entered electronically into the CMT system; login help is available from this page.

During the review period, you will likely receive many emails from CMT (e.g., notifications of your paper assignments). Please make sure emails from CMT are not snagged by your spam filter!

## Confidentiality

By viewing the papers, you agree that the ICML review process is confidential. Specifically, you agree not to use ideas and results from submitted papers in your work, research or grant proposals, unless and until that material appears in other publicly available formats, such as a technical report or as a published work. You also agree not to distribute submitted papers or the ideas in the submitted papers to anyone unless approved by the program chairs.

## Double blind reviewing

This year, we will continue to use double blind reviewing. The authors do not know the identity of the reviewers; this also holds for authors who are on the program committee. In addition, the reviewers do not know the identity of the authors. Note, however, that the area chairs do know the author identities. This helps avoid accidental conflicts of interest.
Of course, double blind reviewing is not perfect: by searching the Internet, a reviewer may discover (or believe they have discovered) the identity of an author. We encourage you not to actively attempt to discover the identities of the authors. If you have good reason to suspect that a paper has been published in the past, you may search the Internet, but we ask that you first read the paper in full. Also, based on the experience of other double-blind conferences, we caution reviewers that the assumed authors may not be the actual authors; multiple independent invention is common, and different groups build on each other's work.
If you believe that you have discovered the identity of the author, please let us know in the “Confidential comments to PC members” in your review (see below).

## Supplementary material

Some papers include supplementary material. Authors may submit up to 10 MB of additional material, containing proofs, audio, images, video, or even data or source code.
Your responsibility as a reviewer is to read and judge the main paper. Reading the supplementary material is optional. However, keeping in mind the space constraints of an ICML paper, you may want to consider looking at the supplementary material before complaining that the authors did not provide a fully rigorous proof of their theorem, or only demonstrated qualitative results on a small number of examples.

## Formatting issues

We ask that you double-check that your papers follow the submission guidelines with respect to length (i.e., at most 10 pages, where the ninth and tenth pages must contain only references), format, and anonymity, and that they are not joke papers! Please notify the area chair of the paper if you find serious formatting issues or anonymity problems with a submitted paper.

## Previously published work and dual-submissions

Where possible, reviewers should identify submissions that are very similar (or identical) to versions that have been previously published, or that have been submitted in parallel to other conferences. A clarification of this policy is given in the Author and Submission Instructions. If you detect violations of the dual submission policy please note this in the “Confidential comments to PC members” section of the review form.

## Writing your reviews: Review Content

We ask that reviewers pay particular attention to the question: does the paper add value to the ICML community? We would prefer to see solid, technical papers that explore new territory or point out new directions for research, as opposed to ones that may advance the state of the art, but only incrementally. A solid paper that heads in a new and interesting direction is taking a risk that we want to reward.
The high quality of ICML depends on having a complete set of reviews for each paper. Reviewer scores and comments provide the primary input used by the program committee to judge the quality of submitted papers. Far more than any other factor, reviewers determine the scientific content of the conference. However, we also stress that short superficial reviews that venture uninformed opinions about a paper are damaging. They may result in the rejection of a high quality paper that the reviewer simply failed to understand. Please take the time to fully assess the paper.

## Overview

You’ll be asked to give a Quality Score and Confidence for each paper (see Quantitative evaluation section, below). Please explain your evaluation assessments in the “Comments to author(s)” text box provided.
Your written review should begin by summarizing the main ideas of each paper and relating these ideas to previous work at ICML and elsewhere. While this part of the review may not provide much new information to authors, it is invaluable to members of the program committee, and it demonstrates to the authors that you understand their paper. You should then discuss the strengths and weaknesses of each paper, addressing the criteria described in the Qualitative Evaluation section, below. Please read the review criteria and use those to guide your decisions. It is tempting to include only weaknesses in your review. However, it is important to also mention the strengths, since an informed decision needs to weigh both. It is particularly useful to include a list of arguments for and against acceptance.
Finally, please fill in the “Summary of review” — this should be a short 1-2 sentence summary of your review.
Importantly, reviewer comments should be detailed, specific and polite, avoiding vague complaints and providing appropriate citations if authors are unaware of relevant work. As you write a review, think of the types of reviews that you like to get for your papers. Even negative reviews can be polite and constructive! Remember that you are assessing the paper’s quality as a scientific contribution to the field.
If you have information that you wish only the program committee to see, you may fill in the “Confidential comments to PC members” box. The confidential comments to the program committee have many uses. Reviewers can use this section to make recommendations for oral versus poster presentations, to make explicit comparisons of the paper under review to other submitted papers, and to disclose conflicts of interest that may have emerged in the days before the reviewing deadline. You can also use this section to provide criticisms that are more bluntly stated.

## Quantitative Evaluation

Reviewers choose an overall rating from four classes for each paper. The program committee will interpret these classes in the following way:

Strong accept: An excellent paper, well above the acceptance threshold.

I vote and argue for acceptance.

Weak accept: A good paper, above the acceptance threshold.

I vote for acceptance, although I would not be upset if it were rejected.

Weak reject: A decent paper, but just below the acceptance threshold.

I vote for rejection, although I would not be upset if it were accepted.

Strong reject: A paper below the acceptance threshold.

I vote and argue for rejection.

Reviewers should NOT assume that they have received an unbiased sample of papers, nor should they adjust their scores to achieve an artificial balance of high and low scores. Scores should reflect absolute judgments of the contributions made by each paper.

## Confidence Score

Reviewers also choose a confidence rating from three classes for each paper. The program committee will interpret these classes in the following way:

Reviewer is an expert: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature.

Reviewer is knowledgeable: The reviewer is fairly confident that the evaluation is correct. It is possible that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature. Mathematics and other details were not carefully checked.

Reviewer’s evaluation is an educated guess: Either the paper is not in the reviewer’s area, or it was extremely difficult to understand.

## Qualitative Evaluation

All ICML papers should be good scientific papers, regardless of their specific area. We judge whether a paper is good using four criteria; a reviewer should comment on all of these, if possible:

#### Quality

Is the paper technically sound? Are claims well-supported by theoretical analysis or experimental results? Is this a complete piece of work, or merely a position paper? Are the authors careful (and honest) about evaluating both the strengths and weaknesses of the work?

#### Clarity

Is the paper clearly written? Is it well-organized? (If not, feel free to make suggestions to improve the manuscript.) Does it adequately inform the reader? A superbly written paper provides enough information for the expert reader to reproduce its results.

#### Originality

Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced? Consider checking the proceedings of recent machine learning conferences to make sure that each paper is significantly different from papers in previous proceedings.

#### Significance

Are the results important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the paper address a difficult problem in a better way than previous research? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions on existing data, or a unique theoretical or pragmatic approach?

## Author feedback and reviewer consensus

Between March 27 and March 30, authors will have a chance to submit feedback on their reviews. This is an opportunity to correct possible misunderstandings about the contents of the paper, or about previous work. Authors may point out aspects of the paper that you missed, or disagree with your review.
It is important to convey to the authors that their comments were read, even if they do not change the final evaluation of the paper. Therefore, please read each rebuttal carefully and keep an open mind. Do the authors’ comments make you change your mind about your review? Have you overlooked something? There is a separate box for reviewers to write responses to the author feedback. Please use this to let the authors know that you read and absorbed their rebuttal, and whether it swayed you one way or the other.
From March 20 to April 10, the area chairs will, where necessary, lead a discussion via the website and try to come to a consensus amongst the reviewers. The discussion will involve both marginal papers, trying to reach a decision on which side of the bar they should fall, and controversial papers, where the reviewers disagree. Many papers fall into these categories, and therefore this phase is a very important one. While engaging in the discussion, recall that different people have somewhat different points of view, and may come to different conclusions about a paper. It may be helpful to ask yourself “do the other reviewers’ comments make sense?”, and “should I change my mind given what the others are saying?” Reviewer consensus is valuable, but is not mandatory. If the reviewers do come to a consensus, the program committee takes it very seriously; only rarely is a unanimous recommendation overruled. However, we do not require conformity: if you think the other reviewers are not correct, you are not required to change your mind.

## Conflicts of interest

Please report any conflicts of interest to the program chairs: francis.bach@inria.fr, david.blei@columbia.edu