International Conference on Machine Learning June 26–July 1, 2012 — Edinburgh, Scotland


ICML 2012 Survey Results

General comments

Conditioned on the respondent being a reviewer

  1. I think the author feedback is mostly a bogus exercise. I am new to ICML, but my impression is that the paper in the proceedings is the 'last word' on that work and that most people do not submit extended versions to JMLR or elsewhere. Based on what I felt was stiff resistance to reasonable criticism of the paper, I should have voted to reject the paper -- however, I thought the ideas were nice so I said 'sure, maybe it will raise some issues.' The review process should be a conversation, not a confrontation, and there should be better instruction to authors and reviewers about what the point of the response period is.

  2. ICML only accepts a small fraction of papers - it is NOT true that only a small fraction of work going on in the field is worthy of presentation to the community. We are hurting our field by having only a small fraction of work being presented. We are also hurting our field by forcing reviewers to review things not in their area, and providing (often harsh and unjustified) reviews about things they do not know much about. This limits the accepted papers to mainly those for which the set of reviewers is very clear, and papers for which there is a precedent for how to review them. The papers that are really novel or define a new area are almost always rejected. I can give many examples of this just from my own experience, where a paper was either nominated for or given an award, or accepted to a major journal in the field, and then rejected from a machine learning conference with poor-quality reviews. Unless we figure out a way to accept more papers, ask reviewers not to pass judgement on papers in areas where they have little or no expertise, and have a better assignment of reviewers to papers, we will continue to reject some of the most novel work going on in machine learning today.

  3. As a reviewer, 6 papers is at the boundary of being too many for me to do a careful job with each one. I spend 2 to 3 hours per paper on the initial review, and then some additional time on the difficult papers in discussion. I feel that this is a lot of time overall, but at the same time I feel it is the bare minimum time necessary for me to do a reasonable job on the reviews.

    I come from operations research, rather than computer science, and a large part of my reviewing is for journal papers within operations research and statistics. When reviewing for those venues I am given one paper and several weeks (even as long as three months) to review it. I generally spend at least 4 hours on each paper in these venues. Now, conference papers are shorter and thus somewhat easier to review than journal papers, and I understand that this reviewing load is typical for computer science program committees, but nevertheless, my feeling is that having fewer papers per reviewer would cause the quality of the reviews to improve.

  4. This conference was extremely well executed.

    I felt 'taken care of' and that there were safety nets for poor decisions at many levels.

    My only complaint (and perhaps this is my mistake?) is that the ability to upload revisions was not well publicized. So some people who happened to have more time that week were able to do so, while others were not. Similarly, some papers had reviewers who read that material, and some did not.

    In general, I think rebuttals are excellent (they can clarify gross misunderstandings, which are inevitable due to time constraints), but revisions are strange; in particular, it is unclear how much they should matter. For one paper I reviewed, the revisions resulted in an almost entirely new paper. It was frustrating, demanded a lot of time, and it was unclear whether it was fair. What was the point of the first round of reviews? Also, with enough effort and pandering to the reviewers, could the revision phase accidentally cause a bad paper to be accepted?

    Lastly, I think it is important that reviewers assign scores BEFORE seeing other reviews (this was discussed on hunch.net, or perhaps by jl in comments); I think it is important for the first round of reviews to be as close to independent as possible. In one review I took part in, a super-famous person showed up, said the paper was terrible, and it was simply impossible to have a discussion thereafter. The single person who had liked the paper beforehand (not me) completely wilted. But I felt that reviewer made valid points, and the paper was actually rather borderline.

  5. Everything is good except for the two-tiered talk system based on reviewer scores. Reviews rarely contain more than 1 bit (accept/reject) of useful information about the quality of a paper.

  6. My response to 'The majority of the reviews of your paper were...' is biased. Of my two papers, one seemed to have mostly reasonable reviews.

    The other had two reviewers, one of whom did not understand the paper and one of whom appeared to have a vendetta against the research. The 'vendetta' reviewer gave a three-sentence review, defended a paper that we had pointed out contained significant errors, and did not address our feedback at all.

    It might be nice to implement something so that obviously sub-standard reviews -- for example, extremely short or abrasive reviews -- were disallowed, or flagged by area chairs.

  7. I think that the time between decision notification and the actual conference is too short (< 2 months). This time it was problematic for me (for a COLT paper), because my decision to travel was contingent on paper acceptance. Two months is not enough time to obtain a visa and still plan the travel properly without making it extremely expensive. No matter where the conference is held, it is always foreign for someone. Also, *all* people holding Chinese and Indian passports (and probably those of a few other countries as well) need a visa to travel to most countries. In my opinion, this is by no means a small part of the ICML community, and even if it were small, a small community should not be made to suffer due to organizational issues. Most conferences seem to have somewhere between 3 and 4 months between paper notification and the actual conference (e.g. NIPS has 3 months and 10 days or so).

  8. The two-stage reviewing in the previous year (with two deadlines) worked very well; please bring it back.

    Also:

    Having full talks in the same timeslots as multiple short talks is a terrible idea. Please don't mix the two: it's the worst of both worlds.

  9. The secondary AC was not helpful. Reviewer assignment should be automated (not done by the AC).

  10. I strongly dislike the idea of being able to upload a new version of the paper after reviews.

    As a reviewer, it massively increased my workload, as I suddenly had to re-read a number of papers in the very short time between rebuttals and decision (there was no way around doing so, as some authors used the rebuttal form to simply point to lines in the new version where they had supposedly addressed criticism). In the end, none of the uploaded new material changed my mind about a paper.

    As an author, I felt pressured to do like everyone else and upload a new version, even though the reviewers had few concrete comments other than typos. I didn't do so in the end, because I was attending another conference during the rebuttal phase and did not have enough time. My paper got accepted anyway, but I was worried for the entire decision phase that the reviewers would hold this decision against me (as somehow showing a lack of respect for the review process). Even if I had been in the lab during this period, the short time frame would only have sufficed for quick and dirty corrections.

    I much prefer the old deal: Everyone has the same pre-agreed deadline, everyone gets their fair shot at acceptance, and rejected papers then have a real chance of improvement before the next deadline.

  11. Too many papers to review. Please make an effort to keep it low: 2-3 at most. We have other review obligations, and ICML demands too much from the reviewers with very little additional impact on the quality of the papers.

  12. The reviewing process, with several phases, author reply, etc., is becoming really complicated, and I observed that many PC members got confused, possibly by not carefully reading the instructions. I think the author reply is useful and clear, but the multi-step reviewing causes confusion.

  13. The lack of direct accountability of area chairs remains a problem. ACs can reject (or accept) a paper without any consequences. In some journals like JMLR, the associate editor assigned to a paper is known to the authors. Therefore, all decisions are taken much more carefully. This comment applies to all ML conferences that I know of.

    Of course, this might create pressure on junior faculty, but the number of biased decisions could be greatly reduced. Careers (particularly those of junior researchers) are decided by decisions which are opaque.

  14. The reviewing load should be reduced by assigning fewer papers to each reviewer. Extending the review period will not help, since there will still be only a limited amount of time one can allocate to reviewing.

    I think the author response protocol, while it can be helpful in a few marginal cases, costs too much for its value, since it creates too much additional work for authors and reviewers. The amount of time we spend on getting our papers accepted to a conference, and on reviewing papers, is steadily increasing. This comes at the expense of time for research, which is what we all really want to be doing.

  15. I have this unsatisfactory feeling that for a lot of papers, the final decision turns out to be based mostly on chance (I share this feeling about NIPS).

    From the reviewer's point of view, there are so many submitted papers that I tended to assign a lot of weak rejects to help make a choice, since a lot of papers would have to be rejected in the end; only for papers that I really thought had to be accepted did I assign an accept (weak or not). In retrospect, I wonder whether this was a good choice, and I think that if this policy was not shared by most reviewers, it was not a good one.

    From the author's point of view, it is obviously always painful to have one's papers rejected. I submitted 3 papers and had 0 accepted. Two were heavy in maths, introducing novel ideas, with some experiments; one introduced a novel idea with quite a lot of experiments to assess it. A large majority of the reviews were unsatisfactory, either completely missing the point (probably it was stated too unclearly) or showing that the reviewer lacked the background to raise the really good questions. In the end, I find it very difficult to tune a paper for a submission to ICML: most of the time, it is found either too theoretical or too practical. At the same time, we have the feeling that some (a lot of?) accepted papers deserve the same criticisms as ours...

    ICML and NIPS are victims of their own success. I have the feeling that something around 40% (or more?) of submitted papers are worth accepting. It is really a waste of time and effort for the whole community to reject so many good papers. The presentation of new results is delayed; this is not good for research, and not good for authors. I think ICML should grow to a larger number of parallel sessions; I totally dislike the idea of having 20+ sessions in parallel as in certain conferences, but at ICML, there are half-days during which no session has any appeal to me. Having one or two extra sessions in parallel, and thus accepting more papers, would be interesting.

    Finally, I acknowledge the effort you made this year to try new things to improve the selection process at ICML.

  16. I feel that having an author response is incredibly helpful as both an author and a reviewer. As an author, even though the reviews haven't necessarily changed, the discussion/paper decision seemed to have been influenced by the rebuttal. As a reviewer, the responses have helped clarify aspects of a paper and have helped me feel more/less confident about my review. ECML/PKDD did not have rebuttals this year, and as a reviewer I really didn't like it.

    I really liked being able to see the final score the authors gave to the paper. That should be done every year.

    In terms of the review process, I think reviewers should give more credit to looking at new problems/applications in the work. I had one paper that I felt was partially penalized by a reviewer because we only looked at a suite of problems in a new and important application domain but did not run on some standard benchmarks. I think credit should be given for looking at a new application when there is a technical innovation required to analyze that domain.

    I also think that care should be taken to ensure that there is diversity in terms of the papers submitted and the papers accepted. This year, for the first time, I felt like AAAI had more ML papers submitted that were of interest to me than ICML did.

    I also think getting the camera-ready instructions and presentation/poster instructions out a bit earlier would be good.

    I also think it may be worthwhile to make the submission paper length 8 pages but give accepted papers a 9th page, because reviewers usually (correctly) ask for additional detail in the paper.

  17. It would be nice if the reviewers could see the final decisions including meta-reviews!

  18. Note: I only reviewed 1 paper, on request.

  19. Secondary meta-reviews added an unreasonable amount of workload, and the instructions regarding them apparently changed during the process.

  20. My reviewing experience this year was pure bliss. My frustration when reviewing for ML conferences had been growing for several years.

    This year, I really enjoyed reviewing for ICML. The Toronto matching system-based assignment process certainly made a difference. But I also think the process the program chairs and area chairs followed during the reviewing period was very well balanced.

    Great job!

  21. I think that John and Joelle did an exceptional job. This was flat out the best reviewing experience I've seen for any conference. Kudos, congratulations, and thanks.

  22. The review period was far too short, in my opinion.

    Also, I reviewed a paper that looked like this: theorem statements without proof sketches in the main paper, and all proofs in a 20-page supplement. I thought this violated the requirement that the main paper be self-contained, and voted to reject on that basis. Some of my co-reviewers disagreed. (In the end, we all voted to reject for a different reason.) Could the author instructions be revised to clarify this issue?

  23. The tone and quality of meta-reviews were particularly bad this year.

  24. It's difficult to disentangle author feedback changing my mind on papers from discussion changing my mind on papers. My mind changed, but it's hard to say which caused it. IMO, author feedback is a good forcing function for discussion, but perhaps not as useful as the stats might show.

  25. Reviewers (including me) have a tendency to do the reviews at the last possible moment, and this results in cramming all the reviews together and lowering their quality. It would be good if reviewers could be forced to spread out the reviewing load over time. For instance, the two-stage reviewing that was tried in past years achieved this.

  26. I dislike the trend of allowing authors to change the papers after review. If this is a good idea for getting quality papers, then computer science should move to journals, like every other field. But conferences are a different beast. Reviewing 8 papers and *then* having to look at revisions of them is too much for already taxed reviewers.

  27. Given the mathematical content in many of the papers, it is impossible to review 7 papers adequately in that short time. I would favor assigning papers with theorem proofs to people who can actually verify the work, with more time for review. Oftentimes, the math is not commented on because the reviewers don't take/have the time to verify it. Those papers are getting a free pass.

  28. Allowing the authors to upload a new version of their paper is a good idea, however more time should be allocated to read the new paper versions.

    The review process has been getting more and more comprehensive over the years, which is a good thing, but it is quite time consuming for the reviewers. In fact, the review process is now as comprehensive, if not more comprehensive, than for journals. Since other fields do not recognize conference publications and often have page limits for their journal publications, could we simply change ICML's publishing format to be a journal instead of proceedings? The journal would simply have one issue per year, and the authors would have the added benefit of presenting their work at ICML.

  29. I strongly support author feedback and think it is a valuable resource for an area chair.

    I would have liked to write more positive things about it as an author as well, but there was absolutely no reaction to it, so I have to assume they just did not read it. This is not a rant because my paper was rejected; it's just how it turned out this year. I've had positive experiences in the past as well (partly also with rejected papers).

    What I would like: authors can rate reviewers (a simple 1-5, possibly according to various categories (length of review, etc.)); this feedback is collected over the years and used for selecting and filtering area chairs and reviewers.

  30. I really, really like that every paper gets a 20-minute talk, and I'm quite disappointed it's not the case this year...

  31. Allowing full paper resubmission is OK. But reviewers shouldn't be expected to reread a substantially different paper. If the authors didn't have a good, clear submission together in time for the first deadline, that's their problem. Additional stages and complications to the review process rarely make a difference, so they aren't a good use of time.

    Having the papers released in two batches encouraged spending more quality time on each paper. That's a good idea.

  32. The question about the quality of the majority of reviews of our paper hides a fair bit of variance -- we had one highly-confident but wrong review, while the others were good. We were pleased to see that the ICML review process was robust to this problem.

  33. The biggest issue in reviewing is the number of papers per reviewer. Providing thorough reviews for a small number of papers is very doable in the time frame provided this year. I would define 'small' as 2-4. (Roughly 1 per week in the review period.) For larger numbers, a reviewer must decide to either: sacrifice time from other commitments, write (some) short reviews based on quick reads, or only review papers on topics they know extremely well (to minimize time on supplemental reading).

    How large a pool of reviewers is needed to reduce the reviewing load from 7 (or 8, in my case) down to 3?
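    As a rough illustration of the arithmetic behind this question -- a minimal sketch, assuming hypothetical figures of roughly 900 submissions and 3 reviews per paper (the actual ICML 2012 numbers may differ):

        # Back-of-the-envelope: size of reviewer pool needed so that no
        # reviewer handles more than `target_load` papers.
        # All figures are illustrative assumptions, not ICML 2012 statistics.

        def reviewers_needed(submissions, reviews_per_paper, target_load):
            """Smallest pool covering all review slots at the given per-reviewer load."""
            total_review_slots = submissions * reviews_per_paper
            return -(-total_review_slots // target_load)  # ceiling division

        for load in (8, 7, 3):
            print(load, reviewers_needed(900, 3, load))  # -> 338, 386, 900

    Under those assumed numbers, cutting the per-reviewer load from 7-8 papers down to 3 would require roughly two to three times the pool: from about 340-390 reviewers to 900.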

  34. I like the idea of a discussion with a chance to make the reviews public. I feel it can improve the quality of reviews. I would recommend getting the permissions to make reviews public *before* the start of the review process.

  35. Allowing substantial changes and uploads of new versions of submissions moves this conference too much towards a journal model. The aim should be to decide whether the snapshot of the work taken at submission time is ready for prime time or not.

    As an area chair, I did not like the procedure for selecting reviewers, as for many of the papers I was responsible for, the most appropriate reviewers were already at their quota. One other quibble I had was that I do not think the policy of allowing the main AC, sometimes with the help of the secondary, to make the accept decisions is a good idea. Some mechanism for having broader discussions about borderline papers amongst ACs would be good.

  36. If revised papers are going to be allowed during the response period, there need to be more precise and constraining instructions on what edits are allowed. Some authors took this as an opportunity to, essentially, completely rewrite their paper. It is unfair to expect reviewers to evaluate an entirely new manuscript on short notice, and unfair to authors if they don't know what changes they can expect to be evaluated.

  37. Our reviews were of really poor quality, seeking more to trash the paper (which apparently has become a routine attitude for reviewers) than to reach an objective decision. The comments by one reviewer on the theory (stupidly just repeated by the AC) are completely wrong. Other complaints were also completely unreasonable. In fact, it is likely that they are by Tobias Scheffer, with whom we have communicated since; this has demonstrated that the comments are ridiculous. Terrible work!

  38. It is normally a big PITA to review 8 papers for ICML, but this year all of the papers were extremely relevant and so I was happy to help with the reviewing process. In general, the system seemed to work quite well and most people that I talked to seemed happy with it.

    However, I also had two very negative experiences with the ICML review process this year. One as an author, and one as a reviewer. I believe that both of these experiences would be alleviated if the area chair associated with papers was not anonymous, since there currently seems to be little accountability in the process.

    As an author: Our paper was rejected, in what I believe is the worst decision that I've seen as an author/reviewer (normally I agree with accept/reject decisions, and I at least understand why a particular decision was made, even if I may not agree with it). From our point of view, it seems that the paper was rejected for largely political reasons (perhaps it did not fall along the right party lines or we should not have revealed our identities by placing the paper on arXiv), and we were very unimpressed with how dismissive the area chair was of the work. I have seen this sort of thing at NIPS before, and occasionally at some other conferences, but was very disappointed to see it at ICML.

    As a reviewer: I understood/agreed with the decision made on 7/8 papers that I reviewed. For the remaining paper, I am very unimpressed with the decision made. In the initial submission, the authors claimed a well-known algorithm as a novel contribution and made a variety of blatantly and provably false claims. This was pointed out, and with the author response the authors submitted a significantly revised paper, completely changing the abstract and what the claimed contributions of the paper were. Since the new paper seemed to address the reviewer comments on the previous version (and since the reviewers did not have time to re-evaluate the completely different paper during the short discussion period), the paper was accepted.

    To me, the discussion period either needs to be longer or there need to be limits placed on the extent of the changes that the authors are allowed to make in the resubmission. I really feel that the authors took advantage of the system and were able to publish a paper that, although promising, was not ready for publication and was not properly reviewed in the form submitted.

  39. Organizers could give a prize to outstanding reviewers (as in CVPR) to encourage and reward good reviewers.

  40. The Program Chairs did a *fantastic* job this year.

  41. It might help to facilitate a discussion between reviewers early on.

  42. I feel that the review process is getting too selective, with reviewers expecting journal-quality papers in an 8-page conference format. It's simply not possible to have full derivations and a full set of experiments with so few pages. However, the ICML reviewer guidelines state that 'reviewers are responsible for verifying correctness'. If this is the case, then there should be some way to attach a full derivation to the paper that the reviewer can examine. From my understanding, the reviewer is not obligated to look at the supplementary material.