International Conference on Machine Learning June 26–July 1, 2012 — Edinburgh, Scotland


ICML 2012 Survey Results

General comments

Conditioned on the respondent being an author of an accepted paper

  1. Very strongly - ONLY COLT or UAI should be co-located with ICML. The focus area of ICML (COLT/UAI) is different from other, applied conferences, and I feel co-locating it with any other conference would have more adverse effects than good. If applications need to be blended into ICML, I would suggest having focused workshops at ICML rather than co-locating with an applied conference.

  2. I think ICML should have a Doctoral Consortium with some specific objectives. One of them could be: 1) a 5-10 minute presentation (accompanied by a poster in another session) by each candidate describing their work at a high level and its implications. This would serve three purposes: a) help both candidates and potential employers in their job search (academia or industry); b) the community would get to know the best of the work in a short time; c) candidates could communicate and network with each other (exchange valuable information).

    This is not something new -- theory conferences like ITCS (http://research.microsoft.com/en-us/um/newengland/events/itcs2012/program.htm) have it under the following name: Graduating Bits: Finishing Ph.D.'s and Postdoc Short Presentations.

    We could also consider recognizing the best Ph.D. dissertation every year. This is also not new -- conferences like KDD regularly do this.

  3. ICML only accepts a small fraction of papers - it is NOT true that only a small fraction of the work going on in the field is worthy of presentation to the community. We are hurting our field by having only a small fraction of the work presented. We are also hurting our field by forcing reviewers to review things not in their area, thereby producing (often harsh and unjustified) reviews of things they do not know much about. This limits the accepted papers mainly to those for which the set of reviewers is very clear, and for which there is a precedent for how to review them. The papers that are really novel or that define a new area are almost always rejected. I can give many examples of this just from my own experience, where a paper was nominated for or given an award, or accepted to a major journal in the field, and then rejected from a machine learning conference with poor-quality reviews. Unless we figure out a way to accept more papers, ask reviewers not to pass judgement on papers in which they truly have little experience or expertise, and have a better assignment of reviewers to papers, we will continue to reject some of the most novel work going on in machine learning today.

  4. This conference was extremely well executed.

    I felt 'taken care of' and that there were safety nets for poor decisions at many levels.

    My only complaint (and perhaps this is my mistake?) is that the ability to upload revisions was not well publicized. So some people who coincidentally had more time that week were able to do so, while others were not. Similarly, some papers had reviewers who read that material, and some did not.

    In general, I think rebuttals are excellent (they can clarify gross misunderstandings, which are inevitable due to time constraints), but revisions are strange; in particular, it is unclear how much they should matter. For one paper I reviewed, the revisions resulted in an almost entirely new paper. It was frustrating, demanded a lot of time, and it was unclear whether it was fair. What was the point of the first round of reviews, then? Also, with enough effort and pandering to the reviewers, could the revision phase accidentally cause a bad paper to be accepted?

    Lastly, I think it is important that reviewers assign scores BEFORE seeing other reviews (this was discussed on hunch.net, or perhaps by jl in comments); I think it is important for the first round of reviews to be as close to independent as possible. In one review I took part in, a super-famous person showed up, said the paper was terrible, and it was simply impossible to have a discussion thereafter. The single person who had liked the paper beforehand (not me) completely wilted. But I felt they had made valid points, and the paper was actually rather borderline.

  5. My submission was in learning theory. The reviewers appeared not to be. The paper was accepted in the end, so perhaps it's all acceptable noise.

    With online proceedings, follow the lead of, e.g., COLT and allow arbitrary-length appendices. By all means impose a rigid 8-page limit on the main body of the paper.

  6. Everything is good except for the two-tiered talk system based on reviewer scores. Reviews rarely contain more than 1 bit (accept/reject) of useful information about the quality of a paper.

  7. My response to 'The majority of the reviews of your paper were...' is biased. Of my two papers, one seemed to have mostly reasonable reviews.

    The other had two reviewers, one of whom did not understand the paper and one of whom appeared to have a vendetta against the research. The 'vendetta' reviewer gave a three-sentence review, defending a paper that we pointed out had significant errors, and did not address our feedback at all.

    It might be nice to implement something so that obviously sub-standard reviews--for example, extremely short or abrasive reviews--were disallowed, or noted by area chairs.

  8. One additional day or half day should really be added.

  9. I'm looking forward to ICML and appreciate the opportunities it presents. I hope to have some very interesting conversations and collaborations.

    With regard to reviews: One of our reviewers seems not to have actually read the paper, as their comments were surface-level (from the opening sentences of the introduction and results) and tangential (including references to literature which did not relate to our paper). I'm not sure if this is because we selected topic areas poorly and we were assigned someone who didn't know where to start, or perhaps because that reviewer simply found our topic not worth their time. While I can't know that reviewer's state of mind, I believe that a different review would have been much more helpful to me as an author, and to our area chairs, who ultimately needed to decide whether or not to accept our paper. Fortunately for us, the paper was in fact accepted, but I also wonder if that review / some aggregation of reviews influenced our selection for a short talk, vs. a long-format talk.

  10. I would like to see all accepted papers as full presentations.

  11. There is considerable variance in the reviews that this form fails to capture. In the case of one of my papers, the reviewers did not understand the paper, while for some of the others they understood most of it. I don't blame the reviewers, but I tend to blame the area chairs for not pushing for better reviews and not making sure that the reviewers actually understand the papers. However, this is not to say I was very disappointed with the reviews in general.

  12. Regarding the question 'The majority of the reviews of your paper were:'

    A more appropriate answer is that one of the reviewers seems to have had something else in mind while reading the problem statement. Hence, I feel that he did not understand the whole paper.

    But I am satisfied with the other two reviewers' comments.

  13. The two-stage reviewing in the previous year (with two deadlines) worked very well; please bring it back.

    Also:

    Having full talks in the same timeslots as multiple short talks is a terrible idea. Please don't mix the two: it's the worst of both worlds.

  14. The secondary AC was not helpful. Reviewer assignment should be automated (not done by the AC).

  15. I strongly dislike the idea of being able to upload a new version of the paper after reviews.

    As a reviewer, it massively increased my workload, as I suddenly had to re-read a number of papers in the very short time between rebuttals and decision (there was no way around doing so, as some authors used the rebuttal form to simply point to lines in the new version where they had supposedly addressed criticism). In the end, none of the uploaded new material changed my mind about a paper.

    As an author, I felt pressured to do like everyone else and upload a new version, even though the reviewers had few concrete comments other than typos. I didn't do so in the end, because I was attending another conference during the rebuttal phase and did not have enough time. My paper got accepted anyway, but I was worried for the entire decision phase that the reviewers would hold this decision against me (as somehow showing a lack of respect for the review process). Even if I had been in the lab during this period, the short time frame would only have sufficed for quick and dirty corrections.

    I much prefer the old deal: Everyone has the same pre-agreed deadline, everyone gets their fair shot at acceptance, and rejected papers then have a real chance of improvement before the next deadline.

  16. The lack of direct accountability of area chairs remains a problem. ACs can reject (or accept) a paper without any consequences. In some journals like JMLR, the associate editor assigned to a paper is known to the authors. Therefore, all decisions are taken much more carefully. This comment applies to all ML conferences that I know of.

    Of course, this might create pressure on junior faculty, but the number of biased decisions could be greatly reduced. Careers (in particular those of junior researchers) are decided by decisions that are opaque.

  17. The reviews in other areas of computer science (like theoretical CS) are much more thorough and correct.

  18. I have this unsatisfactory feeling that, for a lot of papers, the final decision turns out to be mostly based on chance (a feeling I also have about NIPS).

    From the reviewer's point of view, there are so many submitted papers that I tended to assign a lot of weak rejects to help make a choice, since a lot of papers would have to be rejected in the end; only for papers I really thought had to be accepted did I assign an accept (either weak or not). In retrospect, I wonder whether this was a good choice, and I think that if this policy was not shared by most reviewers, it was not a good one.

    From the author's point of view, it is obviously always painful to have one's papers rejected. I submitted 3 papers, and had 0 accepted. 2 were heavy in maths, introducing novel ideas, with some experiments; 1 introduced a novel idea with quite a lot of experiments to assess it. A large majority of the reviews were unsatisfactory, either completely missing the point (probably it was stated too unclearly) or showing that the reviewer lacked the background to raise the really good questions. In the end, I find it very difficult to tune a paper for submission to ICML: most of the time, it is found either too theoretical or too practical. At the same time, we have the feeling that some (a lot of?) accepted papers should have been faulted just as ours were...

    ICML and NIPS are victims of their success. I have the feeling that something around 40% (or more?) of submitted papers are worth accepting. It is really a waste of time, effort, ... for the whole community to reject so many good papers. The presentation of new results is delayed, which is not good for research or for authors. I think ICML should grow to a larger number of parallel sessions; I totally dislike the idea of having 20+ sessions in parallel as in certain conferences, but at ICML there are half-days during which no session has any appeal to me. Having one or two extra sessions in parallel, and thus accepting more papers, would be interesting.

    Finally, I acknowledge the effort you have made this year to try new things to improve the selection process at ICML.

  19. I feel that having an author response is incredibly helpful as both an author and a reviewer. As an author, even though the reviews haven't necessarily changed, I feel the discussion/paper decision seemed to have been influenced by the rebuttal. As a reviewer, the responses have helped clarify aspects of a paper and have helped me feel more/less confident about my review. ECML/PKDD did not have rebuttals this year, and as a reviewer I really didn't like it.

    I really liked being able to see the final score the authors gave to the paper. That should be done every year.

    In terms of the review process, I think reviewers should give more credit to looking at new problems/applications in the work. I had one paper that I felt was partially penalized by a reviewer because we only looked at a suite of problems in a new and important application domain but did not run on some standard benchmarks. I think credit should be given for looking at a new application when there is a technical innovation required to analyze that domain.

    I also think that care should be taken to ensure that there is diversity in terms of the papers submitted and the papers accepted. This year, for the first time, I felt like AAAI had more ML papers submitted that were of interest to me than ICML did.

    I also think getting the camera-ready instructions and presentation/poster instructions out a bit earlier would be good.

    I also think it may be worthwhile to make the submission length 8 pages but give accepted papers a 9th page, because reviewers usually (correctly) ask for additional detail in the paper.

  20. I cannot tell whether the rebuttal or the revised paper submission was considered by the reviewers. You should ask the reviewers that.

  21. It would be nice if the reviewers could see the final decisions including meta-reviews!

  22. Secondary meta-reviews added an unreasonable amount of workload, and the instructions regarding them apparently changed during the process.

  23. I like the opportunity to have a look at all the papers during the evening poster sessions.

  24. Of my three reviews, I would like to mention that I had a 'right to the point' review.

  25. I think it would be nice to leave more time for the authors' rebuttal phase. This year there were only 3 days, during which the authors needed to prepare the rebuttal(s) and update their paper(s); it can be hard to finish on time for people who co-author more than one submission.

  26. The tone and quality of meta-reviews were particularly bad this year.

  27. Outstanding transparency of the whole process this year. THANK YOU.

  28. Reviewers (including me) have a tendency to do reviews at the last possible moment, and this results in cramming in all the reviews and lowering their quality. It would be good if reviewers could be forced to spread out the reviewing load over time. For instance, the two-stage reviewing tried in past years achieved this.

  29. I really, really liked that every paper had a 20-minute talk, and I'm quite disappointed it's not the case this year...

  30. The paper matching system seems to work well.

  31. We were very happy that our paper was accepted, but had a very poor sense of what the reviewers discussed after receiving revisions. The reviews listed on CMT at the end were unchanged from before, but the meta-reviews hinted that the reviewers were 'very pleased' with the revisions. Furthermore, it wasn't clear whether the accept/reject decisions were the reviewers' initial response or the response after revisions. In general, more transparency would be helpful for understanding what was or wasn't well received.

  32. The question about the quality of the majority of reviews of our paper hides a fair bit of variance -- we had one highly-confident but wrong review, while the others were good. We were pleased to see that the ICML review process was robust to this problem.

  33. Allowing substantial changes and uploads of new versions of submissions moves this conference too much towards a journal model. The aim should be to decide whether the snapshot of the work taken at submission time is ready for prime time or not.

    As an area chair, I did not like the procedure of selecting reviewers, as for many of the papers I was responsible for, the most appropriate reviewers were at their quota. One other quibble I had was that I do not think the policy of allowing the main AC, sometimes with the help of the secondary, to make the accept decisions is a good idea. Some mechanism for having broader discussions about borderline papers amongst ACs would be good.

  34. If revised papers are going to be allowed during the response period, there need to be more precise and constraining instructions on what edits are allowed. Some authors took this as an opportunity to, essentially, completely rewrite their paper. It is unfair to expect reviewers to evaluate an entirely new manuscript on short notice, and unfair to authors if they don't know what changes they can expect to be evaluated.

  35. My only disappointment is regarding the assignment of durations to the talks. From the reviews we had (three strong accepts), we were quite confident that the paper would get a 20-minute slot. However, it ended up with a 5-minute one. If there were some additional hidden criteria, it would have been nice to know about them. Otherwise, from the outside it seems like the 5-minute talks were given to the borderline papers.

  36. Organizers could give a prize to outstanding reviewers (as in CVPR) to encourage and reward good reviewing.

  37. The Program Chairs did a *fantastic* job this year.