International Conference on Machine Learning June 26–July 1, 2012 — Edinburgh, Scotland

ICML 2012 Survey Results

General comments

  1. I think the author feedback is mostly a bogus exercise. I am new to ICML, but my impression is that the paper in the proceedings is the 'last word' on that work and that most people do not submit extended versions to JMLR or elsewhere. Based on what I felt was stiff resistance to reasonable criticism of the paper, I should have voted to reject the paper -- however, I thought the ideas were nice so I said 'sure, maybe it will raise some issues.' The review process should be a conversation, not a confrontation, and there should be better instruction to authors and reviewers about what the point of the response period is.

  2. Very strongly - ONLY COLT or UAI should be co-located with ICML. The focus area of ICML (COLT/UAI) is different from other, applied conferences, and I feel co-locating it with any other conference would have more adverse effects than good. If applications need to be blended into ICML, I would suggest having focused workshops at ICML rather than co-locating with an applied conference.

  3. Even though my paper was rejected, I feel that the reviewers raised valid points, which I tried to address in my rebuttal and revised version. But I do think it is hard to expect reviewers to re-review the revised version and make decisions in such a short time (post-rebuttal phase).

  4. I was disappointed in the reviews. I think many reviews are done by very junior people and the reviewing quality is very variable. I state this both as someone who submitted a paper and as someone who reviewed for the conference. I suggest not allowing graduate students to review for their faculty member and trying to have more senior people review.

  5. I do not understand why the meta-reviewer made the final decision on his own instead of taking the other reviewers' comments into consideration. A paper that had received three weak-accept opinions was killed by his rudeness. That is quite unusual compared to other prestigious conferences such as SIGKDD, ICDM, and AAAI. In those conferences, papers are accepted even with two negative reviews if there is a strong positive one. I think the quality of ICML'12 is not very good this year in terms of organizing the reviewing process. It gives people a bad impression, especially since the organizers would not give those rejected papers any chance of rebuttal once the meta-reviewer had made his decision. That is not fair at all.

  6. I think ICML should have a Doctoral Consortium with some specific objectives. Some of them could be: 1) a 5-10 minute presentation (accompanied by a poster in another session) by each candidate, describing their work at a high level and its implications. This will serve three purposes: a) it will help both candidates and potential employers in their job search (academia or industry); b) the community will get to know the best of the work in a short time; c) candidates can communicate and network with each other (exchanging valuable information).

    This is not something new -- theory conferences like ITCS (http://research.microsoft.com/en-us/um/newengland/events/itcs2012/program.htm) have it under the following name: Graduating Bits: Finishing Ph.D.'s and Postdoc Short Presentations.

    We could also consider recognizing the best Ph.D. dissertation every year. This is also not new -- conferences like KDD regularly do this.

  7. ICML only accepts a small fraction of papers - it is NOT true that only a small fraction of the work going on in the field is worthy of presentation to the community. We are hurting our field by having only a small fraction of work presented. We are also hurting our field by forcing reviewers to review things not in their area, and providing (often harsh and unjustified) reviews about things they do not know much about. This limits the accepted papers to mainly those for which the set of reviewers is very clear, and papers for which there is a precedent for how to review them. The papers that are really novel or define a new area are almost always rejected. I can give many examples of this just from my own experience, where a paper was either nominated for or given an award or accepted to a major journal in the field, and then rejected from a machine learning conference with poor quality reviews. Unless we figure out a way to accept more papers, ask reviewers not to pass judgement on papers that they truly have no experience working on or little expertise in, and have a better assignment of reviewers to papers, we will continue to reject some of the most novel work going on in machine learning today.

  8. This was my first experience with ICML. I am disappointed because the paper I coauthored received two accept decisions from two reviewers and one weak reject from another reviewer. The reviewer who rejected our paper clearly specified that he is not an expert in the problem addressed by our paper. In addition, he/she clearly mentions that his/her decision (i.e., weak reject) is an 'educated guess'. The meta-review rejects the paper citing the main reasons mentioned by the non-expert reviewer. I believe the selection of the reviewers should be more careful. The fact that the reviewers are volunteering their time should not be a justification for the risk of rejecting even one paper that is worth being accepted.

  9. As a reviewer, 6 papers is at the boundary of being too many for me to do a careful job with each one. I spend 2 to 3 hours per paper on the initial review, and then some additional time on the difficult papers in discussion. I feel that this is a lot of time overall, but at the same time I feel it is the bare minimum time necessary for me to do a reasonable job on the reviews.

    I come from operations research, rather than computer science, and a large part of my reviewing is for journal papers within operations research and statistics. When reviewing for those venues I am given one paper and several weeks (even as long as three months) to review it. I generally spend at least 4 hours on each paper in these venues. Now, conference papers are shorter and thus somewhat easier to review than journal papers, and I understand that this reviewing load is typical for computer science program committees, but nevertheless, my feeling is that having fewer papers per reviewer would cause the quality of the reviews to improve.

  10. Naturally, for a conference of this size, I have no expectation that reviewers will read my feedback. I was happy that the meta-reviewer read our feedback, and seemed to support our paper and agree with our arguments, but unfortunately they still had to reject it. Based on this experience, I'd be in favor of skipping response to the reviewers, and having a (much shorter) period where I could respond directly to the area chair. That way, they could have more autonomy to reverse the decisions of the reviewers given a strong rebuttal. Also, I wouldn't have to pander to less-than-stellar reviewers.

    I also didn't like the format of the response. 4000 characters meant that I had to waste my entire rebuttal explaining the paper to the reviewer who hated it so much that he didn't bother to read it, and I had no space left to respond to the legitimate questions of more favorable reviewers. Perhaps the character limit should be per-reviewer, rather than for the entire rebuttal?

  11. Comments on the review process:

    Though my paper was rejected, I found the reviewers reasonable. I am very impressed that ICML this year had acknowledgement of the rebuttal from the reviewers, giving me confidence that some (2 out of 3) reviewers did bother reading my rebuttal.

    A reviewer asked me to compare with other (not very relevant) techniques, and given the one-week time frame, it was practically impossible to do so. So I wish future ICMLs would extend the rebuttal period.

    General comments for improvement:

    I have always admired the transparency of ICML, though I've never had a paper accepted. But here is a small suggestion, which I hope will be looked at. My point is that no matter how you try to improve the review process, there will always be good papers being rejected and non-outstanding, even flawed or inferior, papers being accepted. Here I provide an example of my colleague's comment on an accepted ICML paper this year:

    http://atpassos.posterous.com/icml-2012-reading-list

    Essentially, a paper titled `A Graphical Model Formulation of Collaborative Filtering Neighbourhood Methods with Fast Maximum Entropy Training' was accepted to ICML 2012. This paper lacks a survey of related methods in undirected graphical models, and it did not bother to evaluate on data sizes that are today's standard. I quote my colleague's comment:

    'But I'm shocked that the authors didn't even search for existing papers in this area of undirected graphical models for collaborative filtering. At least keywords like 'Markov random fields', 'Boltzmann machines', 'Markov networks', 'Markov relational networks' should have been tried. But they didn't. And the reviewers did not catch that either.

    A terrible thing is the experiment itself, today you can run with 100M ratings easily with standard laptops, and here they run with 1M! In 2007, the AusDM reviewers did complain about the data size when we try the 100K set, and the NIPS reviewers didn't even care about the contributions.'

    Social networking technology is popular nowadays, and I strongly believe there should be a common place for people to point out related work on accepted papers and to make comments, so that other researchers, especially PhD students, have a better perspective on an accepted paper despite all the possible, unavoidable limitations of the review process.

    Thanks again for conducting the survey.

  12. This conference was extremely well executed.

    I felt 'taken care of' and that there were safety nets for poor decisions at many levels.

    My only complaint is (and perhaps this is my mistake?) that the ability to upload revisions was not well publicized. So some people who coincidentally had more time that week were able to do so, while others were not. Similarly, some papers had reviewers who read that material, and some did not.

    In general, I think rebuttals are excellent (can clarify gross misunderstandings, which are inevitable due to time constraints), but that revisions are strange; in particular, it is vague how much they should matter. For one paper I reviewed, the revisions resulted in an almost entirely new paper. It was frustrating, demanded a lot of time, and it was unclear if it was fair. What was the point of the first round of revisions? Also, with enough effort and pandering to the reviewers, could the revision phase accidentally cause a bad paper to be accepted?

    Lastly, I think it is important that reviewers assign scores BEFORE seeing other reviews (this was discussed on hunch.net, or perhaps by jl in comments); I think it is important for the first round of reviews to be as close to independent as possible. In one review I took part in, a super-famous person showed up, said the paper was terrible, and it was simply impossible to have a discussion thereafter. The single person who had liked the paper beforehand (not me) completely wilted. But I felt they had made valid points, and the paper was actually rather borderline.

  13. I tried my best with my submission, including a full paper, supplementary material (a demo of the software), and a revised paper during the author response, but it is so disappointing that all these efforts were of no use...

  14. My submission was in learning theory. The reviewers appeared not to be. The paper was accepted in the end, so perhaps it's all acceptable noise.

    With online proceedings, follow the lead of other conferences (e.g., COLT) and allow arbitrary-length appendices. By all means impose a rigid 8-page limit on the main body of the paper.

  15. Everything is good except for the two-tiered talk system based on reviewer scores. Reviews rarely contain more than 1 bit (accept/reject) of useful information about the quality of a paper.

  16. My paper wasn't accepted, but I felt like the reviews were not as hostile as I have had at some other conferences. This may just be random chance, but I appreciate the comments that I received.

    There was certainly some misunderstanding about what our paper was trying to accomplish, but overall I agreed with the reviewers' criticisms.

  17. My paper was rejected because, according to the reviewers, the idea was new and interesting but was not conclusively proved (there was no time for computational experimentation). I think this is a poor rationale for a conference review process. This is the last time I submit to ICML, as I think that the reviewer pool is all over the map.

  18. My response to 'The majority of the reviews of your paper were...' is biased. Of my two papers, one seemed to have mostly reasonable reviews.

    One had two reviewers, one of whom did not understand the paper and one of whom appeared to have a vendetta against the research. The 'vendetta' reviewer gave a three-sentence review, defending a paper that we pointed out had significant errors, and did not address our feedback at all.

    It might be nice to implement something so that obviously sub-standard reviews--for example, extremely short or abrasive reviews--were disallowed, or noted by area chairs.

  19. I think that the time between decision notification and the actual conference is too short (< 2 months). This time it was problematic for me (for a COLT paper), because my decision to travel was contingent on paper acceptance. Two months is not enough time to obtain a visa and still plan the travel properly without making it extremely expensive. No matter where the conference is held, it is always foreign for someone. Also, *all* people holding Chinese and Indian passports (and probably those of a few other countries as well) need a visa to travel to most countries. In my opinion this is by no means a small part of the ICML community, and even if it were small, a small community should not be made to suffer due to organizational issues. Most conferences seem to have somewhere between 3-4 months between paper notification and the actual conference (e.g., NIPS has 3 months and 10 days or so).

  20. One day or a half day should really be added.

  21. Although I was really excited about the author response and the option to enhance / edit the paper based on the reviewers' feedback, I was highly disappointed when I found out that the reviewers didn't read the updated version (in which all their comments / concerns were addressed), and didn't even bother to update their reviews to match the last version of the paper. I ended up with reviews that raised comments and points already addressed in the latest version of the paper, and that referred to discussions and typos already fixed. If I had known that the updated version would not be considered, I wouldn't have spent so much time running new experiments and adding whole sections to the paper to address the reviewers' comments and feedback.

  22. Basically, the reason I feel that the author responses were meaningless is that the final decision justification was based on reviewer comments that were actually fixed. In fact, the errors in the paper were found prior to receiving the reviewer responses, and new proofs had already been derived in anticipation of them. The decision rationale claimed that the paper had errors when in fact the revised version had none. If there is a response period, then at least the responses should be read. I figure that the committee performs a triage on the papers and views only some of the responses, but not all -- basically those borderline papers that eke it into the top 242.

    If that's the case, then maybe responses should be denied to those that won't ever be looked at again. Why put the authors through the trouble? Our egos can handle an outright reject (it's part of being in academia).

  23. I think AAAI and IJCAI are already such long experiences that it would be hard to co-locate with them.

    Uploading revised papers puts too much load on the reviewers.

  24. I felt that the reviewers pretty much disregarded both of our author replies, even though there were factual errors (about the supposed similarity of another paper, which is actually a quite different approach and not directly comparable, as we explained in the reply; and about the 'questionableness' of the results, which seemed to result from a failure on the reviewer's part to understand the different expected behavior of training data and test data). It was very frustrating and basically seemed like a complete waste of time to even bother with the author response.

    In general, I think that ICML has devolved to primarily accepting a certain type of paper. Incremental improvements to classifiers, particularly very mathematical approaches like manifold and kernel methods, are good. Applications papers, papers that explore different aspects of machine learning (especially when no standard benchmarks exist), papers about machine learning in context (robotic learning, goal-directed learning, lifelong learning) are bad. The conference has become more and more 'NIPS-like' and shows little sign of being willing to change direction. I have pretty much given up attending the conference but thought I'd give it another try this year. I think it is really unfortunate that the ML community seems to have become so narrowly focused, and there doesn't seem to be any real leadership behind trying to broaden the set of 'acceptable' topics/problems/methods.

  25. Although they may not bring any excitement, works on less audacious tasks have a better chance. Why must we modify the old method? Why must we link to existing methods when introducing a new method to solve an open problem? When something can be clearly expressed by theory, why must we use experiments to demonstrate it? A reviewer can say he does not know the background, yet still say 'strong reject'. What 'good' logic!

  26. I'm looking forward to ICML and appreciate the opportunities it presents. I hope to have some very interesting conversations and collaborations.

    With regard to reviews: One of our reviewers seems not to have actually read the paper, as their comments were surface-level (from the opening sentences of the introduction and results) and tangential (including references to literature which did not relate to our paper). I'm not sure if this is because we selected topic areas poorly and we were assigned someone who didn't know where to start, or perhaps because that reviewer simply found our topic not worth their time. While I can't know that reviewer's state of mind, I believe that a different review would have been much more helpful to me as an author, and to our area chairs, who ultimately needed to decide whether or not to accept our paper. Fortunately for us, the paper was in fact accepted, but I also wonder if that review / some aggregation of reviews influenced our selection for a short talk, vs. a long-format talk.

  27. I would like to see all accepted papers as full presentations.

  28. There is considerable variance in the reviews that this form fails to capture. In the case of one of my papers, the reviewers did not understand the paper, while for some others they understood most of it. I don't blame the reviewers, but I tend to blame the area chairs for not pushing for better reviews and for making sure that the reviewers do understand the papers. However, this is not to say I was very disappointed with the reviews in general.

  29. Regarding the question - The majority of the reviews of your paper were:

    the more appropriate answer is that it seems one of the reviewers had something else in mind while reading the problem statement. Hence, I feel that he did not understand the whole paper.

    But I am satisfied with the other two reviewers' comments.

  30. Reviewing at machine learning conferences has become pure nit-picking. Papers with big ideas are rejected while picture-perfect incremental papers (that get us nowhere as a field but are thorough) get accepted. Also, it's all about fads (nonparametric Bayes, deep belief nets, regret minimization, etc.). It's the emperor's new clothes and nobody will say or do anything until it's too late and another community takes over.

  31. The two-stage reviewing in the previous year (with two deadlines) worked very well, please bring it back.

    Also:

    Having full talks in the same timeslots as multiple short talks is a terrible idea. Please don't mix the two: it's the worst of both worlds.

  32. The secondary AC was not helpful. Reviewer assignment should be automated (not done by the AC).

  33. The reviews were technically correct, but we felt that they somehow missed the point of the paper. Also, we felt that there could have been some hindsight bias (our approach and results may seem straightforward, but only in hindsight). Our paper was also mostly theoretical and we were clear about that, but we were criticized for not including extensive experimental work.

  34. Rejecting papers with an average score of 3.0 (compare http://hunch.net/?p=2517), i.e. weak accept, is quite tough and, in my opinion, contradicts the idea of 'giving controversial papers a chance'. In general, my impression is that the meta-reviewers have too much impact when they are allowed to reject papers that expert reviewers rated with 'strong accept' and 'weak accept'. You can see, I am a bit frustrated; but otherwise the reviewing process was exemplary.

  35. The process for new versions of the paper uploaded during the rebuttal period should be clarified. The amount of time is often too brief for preparing a fully revised paper, and it is very hard to estimate how a partly revised paper (say, a paper where one critical aspect has been corrected but that does not directly respond to all review comments) will be taken by the reviewers.

    The instructions for the new uploads should be clarified for both the authors and the reviewers, so that everyone would interpret the new versions in the same light. Perhaps now some reviewers already expect the authors to prepare a full new version and consider the rebuttal invalid unless it is accompanied by one. At the same time, some reviewers might not even look at the new version.

    Some alternative practical suggestions: 1) Make the rebuttal time 1-2 weeks longer and make revised versions obligatory. 2) Allow reviewers or area chairs to request revised versions if they consider it necessary for borderline cases. Then the authors would know that it pays off to do the extra work despite the brief time. 3) Allow the authors to choose what they intend to tell with the new version. Something like a choice between 'this is a fully revised version that would be the camera-ready' and 'this version clarifies one aspect of the paper as explained in the rebuttal, but would still be modified to take the other comments into account'. The authors can already do so by writing the same information in the rebuttal, but the instructions could explicitly mention this.

    Some statistics on how many authors submitted revised versions (and whether this correlates somehow with the acceptance) would also be nice. As the system is new in the field, it would pay off to provide maximal amount of information for all parties.

  36. I strongly dislike the idea of being able to upload a new version of the paper after reviews.

    As a reviewer, it massively increased my workload, as I suddenly had to re-read a number of papers in the very short time between rebuttals and decision (there was no way around doing so, as some authors used the rebuttal form to simply point to lines in the new version where they had supposedly addressed criticism). In the end, none of the uploaded new material changed my mind about a paper.

    As an author, I felt pressured to do like everyone else and upload a new version, even though the reviewers had few concrete comments other than typos. I didn't do so in the end, because I was attending another conference during the rebuttal phase and did not have enough time. My paper got accepted anyway, but I was worried for the entire decision phase that the reviewers would hold this decision against me (as somehow showing a lack of respect for the review process). Even if I had been in the lab during this period, the short time frame would only have sufficed for quick and dirty corrections.

    I much prefer the old deal: Everyone has the same pre-agreed deadline, everyone gets their fair shot at acceptance, and rejected papers then have a real chance of improvement before the next deadline.

  37. The author response period should be at least one week.

  38. The area chair and meta-reviewer reversed the reviewers' decisions. He rejected a paper that was recommended for acceptance by two reviewers - one strong accept and one weak accept - with a weak reject from the third one. To support his decision, the area chair obtained two very weak and disappointing meta-reviews that repeated a few of the lame claims of the weak-reject reviewer and clearly hadn't read the paper.

    As an area chair for NIPS, CVPR and ECCV, I know those conferences discourage such single-handed decisions based on the area chair's own opinion, made without having read the paper carefully.

  39. Too many papers to review. Please make an effort to keep it low: 2-3 at most. We have other review obligations, and ICML demands too much from the reviewers with very small additional impact on the quality of the papers.

  40. The reviewing process, with several phases, author reply, etc., is becoming really complicated, and I observed that many PC members get confused, possibly by not carefully reading the instructions. I think the author reply is useful and clear, but the several-step reviewing causes confusion.

  41. The lack of direct accountability of area chairs remains a problem. ACs can reject (or accept) a paper without any consequences. In some journals like JMLR, the associate editor assigned to a paper is known to the authors. Therefore, all decisions are taken much more carefully. This comment applies to all ML conferences that I know of.

    Of course, this might create pressure on junior faculty, but the number of biased decisions could be greatly reduced. Careers (particularly those of junior researchers) hinge on decisions which are opaque.

  42. The metareviews were factually incorrect - rather than being a summary, they explicitly added new conclusions, which were erroneous, and to which there is no mechanism for response. This seems arbitrary at best, and somewhat lazy at worst.

  43. The reviews in other areas of computer science (like theoretical cs) are much more thorough and correct.

  44. I think it would be good to require a new review if reviewers declare that they have considered the revised paper after the author response. In my case none of the reviews changed at all, but two of three reviewers were marked as having considered the revision. This way, as an author, I get the feeling that the response process was not taken seriously, even though I put work into addressing the reviewers' remarks.

  45. The reviewing load should be reduced by assigning fewer papers to each reviewer. Extending the review period will not be helpful, since there will still be only a limited time one can allocate to reviewing.

    I think the author response protocol, while it can be helpful in a few marginal cases, costs too much for its value, since it causes too much additional work for authors and reviewers. The amount of time we spend on getting our papers accepted to a conference, and on reviewing papers, is steadily increasing. This comes at the expense of time for research, which is what we all really want to be doing.

  46. I have this unsatisfactory feeling that for a lot of papers, the final decision turns out to be mostly based on chance (this feeling is shared with NIPS).

    From the reviewer point of view, there are so many submitted papers that I tended to assign a lot of weak rejects to help make a choice, since a lot of papers would have to be rejected in the end; only to papers I really thought had to be accepted did I assign an accept (either weak or not). In retrospect, I wonder whether this was a good choice, and I think that if this policy was not shared by most reviewers, it is not a good one.

    From the author point of view, it is obviously always painful to have one's papers rejected. I submitted 3 papers, and had 0 accepted. Two were heavy in maths, introducing novel ideas, with some experiments; one introduced a novel idea with quite a lot of experiments to assess it. A large majority of the reviews were unsatisfactory, either completely missing the point (probably it was stated too unclearly), or showing a lack of background from the reviewer to be able to raise the really good questions. In the end, I find it very difficult to tune a paper for submission to ICML: most of the time, it is found either too theoretical or too practical. At the same time, we have the feeling that some (a lot of?) accepted papers should have been faulted as ours were...

    ICML and NIPS are victims of their success. I have the feeling that something around 40% (or more?) of submitted papers are worth being accepted. It is really a waste of time and effort for the whole community to reject so many good papers. The presentation of new results is delayed; this is not good for research, and not good for authors. I think ICML should grow to a larger number of parallel sessions; I totally dislike the idea of having 20+ sessions in parallel as in certain conferences, but at ICML there are half-days during which no session has any appeal to me. Having one or two extra sessions in parallel, thus accepting more papers, would be interesting.

    Finally, I acknowledge the effort you have made this year to try new things to improve the selection process at ICML.

  47. I feel that having an author response is incredibly helpful as both an author and a reviewer. As an author, even though the reviews haven't necessarily changed, I feel that the discussion/paper decision seemed to have been influenced by the rebuttal. As a reviewer, the responses have helped clarify aspects of a paper and have helped me feel more/less confident about my review. ECML/PKDD did not have rebuttals this year, and as a reviewer I really didn't like it.

    I really liked being able to see the final score the authors gave to the paper. That should be done every year.

    In terms of the review process, I think reviewers should give more credit to looking at new problems/applications in the work. I had one paper that I felt was partially penalized by a reviewer because we only looked at a suite of problems in a new and important application domain but did not run on some standard benchmarks. I think credit should be given for looking at a new application when there is a technical innovation required to analyze that domain.

    I also think that care should be taken to ensure that there is diversity in terms of the papers submitted and the papers accepted. This year, for the first time, I felt like AAAI had more ML papers submitted that were of interest to me than ICML did.

    I also think getting the camera-ready instructions and presentation/poster instruction out a bit earlier would be good.

    I also think it may be worthwhile to make the submission paper length 8 pages but give accepted papers a 9th page, because reviewers usually (correctly) ask for additional detail in the paper.

  48. As the authors of a rejected paper, anything we say may be interpreted the wrong way; but we did find it odd that the first meta-review (mr1) indicated the reviewers had agreed to accept, and then the second (mr2) rejected, based largely on what seemed to us to be subjective opinion. Perhaps this should be better reconciled?

  49. I cannot tell whether the rebuttal or the revised paper submission was considered by the reviewers. You should ask the reviewers that.

  50. The reviews were, frankly, disgustingly ignorant. The anonymity offered by the process seems to bring out the worst in the egos of the reviewers, with a combination of petty, pointless nitpicking masking ignorance and a total lack of any kind of scholarly humility. I guess this rant can be construed as me being bitter, but this process is disillusioning and only seems to promote a dangerous kind of homophily in ML research.

    I strongly urge a move towards removing anonymity. Its purported benefit, i.e. removing intimidation by big names, is an insidious concept to protect.

  51. Some reviewers didn't read the author response. If they don't intend to do so, they should explicitly indicate that in their feedback.

  52. It would be nice if the reviewers could see the final decisions including meta-reviews!

  53. Incentives should be given to reviewers for reading authors' rebuttals and taking them into account. Incentives should be given to reviewers to promote diversity in the topics and methods of submitted papers.

  54. Note: I only reviewed 1 paper, on request.

  55. - Authors should also have the ability to judge the quality of the reviews.

    - Actions should be taken against reviewers who mainly decide on acceptance or rejection based on whether there is a proper comparison with their own work, even if that work is not relevant.

  56. Secondary meta-reviews added an unreasonable amount of workload, and the instructions regarding them apparently changed during the process.

  57. I like the opportunity to have a look at all the papers during the evening poster sessions.

  58. Of the three reviews, I would like to mention that I had one 'right to the point' review.

  59. I think it would be nice to leave more time for the authors' rebuttal phase. This year there were only 3 days, during which the authors needed to prepare the rebuttal(s) and update their paper(s); this can be hard to finish on time for people who co-author more than one submission.

  60. The 3 initial reviews were fair, and I provided detailed answers to the questions the reviewers raised. However, a fourth reviewer was added after the rebuttal:

    this reviewer did not understand the paper at all and barely read it, contrary to the other reviewers, and since it was after the rebuttal, I could not even provide feedback.

    I think it is really bad to add additional reviewers after the rebuttal without the authors having the possibility to give feedback.

    Also, since the rebuttal period is very short, I do not see the point in enabling authors to provide a revised version of the paper taking into account the reviewers' feedback: either the change is marginal and then there is no reason to provide one, or it needs more care and would then require more time (I think somewhere between 10 days and 2 weeks should be OK).

  61. My reviewing experience this year was pure bliss. My frustration when reviewing for ML conferences had been growing for several years.

    This year, I really enjoyed reviewing for ICML. The Toronto matching system-based assignment process certainly made a difference. But I also think the process the program chairs and area chairs followed during the reviewing period was very well balanced.

    Great job!

  62. I wish I'd written a better paper. Comments were somewhat helpful. I couldn't tell if there wasn't much appreciation for theoretical contributions or if the reviewers missed the relevance. Cheers.

  63. I think that John and Joelle did an exceptional job. This was flat out the best reviewing experience I've seen for any conference. Kudos, congratulations, and thanks.

  64. The review period was far too short, in my opinion.

    Also, I reviewed a paper that looked like this: Theorem statements without proof sketches in the main paper, and all proofs in a 20 page supplement. I thought this violated the requirement that the main paper be self-contained, and voted to reject on that basis. Some of my co-reviewers disagreed. (In the end, we all voted to reject for a different reason.) Could the author instructions be revised to clarify this issue?

  65. Tone and quality of meta reviews were particularly bad this year.

  66. It's difficult to disentangle author feedback changing my mind on papers from discussion changing my mind on papers. My mind changed, but it's hard to say which caused it. IMO, author feedback is a good forcing function for discussion, but perhaps not as useful as the stats might show.

  67. The reviewers of my papers highlighted the drawbacks much more than the novel contributions -- in fact they tried to completely ignore the benefits, either by not mentioning them or by belittling the experimental results. I understand the Program Chairs can do little about reviewer selection, and some communities (e.g., the one pertaining to my submitted paper) are very protective about accepting new ideas; nonetheless, the following measures could be helpful:

    1. Instead of just checking a radio button that the reviewer read the rebuttal, it should be mandatory for them to summarize the rebuttal and the changes/improvements reported in a new version, if any.

    2. Reviewers MUST provide suggestions on how to improve (or how to present) the manuscript so that it gets accepted, and/or point to the appropriate venue.

    3. In case of a resubmission from AISTATS or NIPS, the paper should get a new set of reviewers, so that they are responsible and will not judge it subjectively. Indeed, if a reviewer rejects a paper once, he/she is more likely to reject it again than others are.

  68. Outstanding transparency of the whole process this year. THANK YOU.

  69. Reviewers (including me) have a tendency to do the reviews at the last possible moment, and this results in cramming all the reviews together and lowering the quality. It would be good if reviewers could be forced to spread out the reviewing load over time. For instance, the two-stage reviewing that was tried in past years achieved this.

  70. More reviewers that understand the basics of Bayesian statistics!

  71. I dislike the trend of allowing authors to change the papers after review. If this is a good idea for getting quality papers, then computer science should move to journals, like every other field. But conferences are a different beast. Reviewing 8 papers and *then* having to look at revisions of them is too much for already taxed reviewers.

  72. 'The majority of the reviews of your paper were:'

    has a more complicated answer than the multiple choice above allows: one was low quality, supremely focused on citing some prior work that was somewhat off topic -- I don't think he understood why it was off topic, and he didn't address my rebuttal explaining why. One was high quality, including helpful suggestions. One didn't quite understand the paper.

    A couple reviewers criticized that the methods I tested for my new problem setting were fairly straightforward / incremental. The degree to which novelty is promoted over more straightforward/sensible methods may lead to the conference proceedings being full of complicated, novel methods---biased away from the sensible/simple methods that (a) need to be tested/compared for the sake of science, and (b) are often more robust and reproducible.

    Recall multiple studies that tried to reproduce a variety of published methods and found that they rarely performed as advertised on different datasets. I've experienced it myself with 6 'fast, high-quality' text clustering methods, as well as with some other methods. This is the elephant in the room.

    Reviewers should constantly push to get authors to release datasets and *code*, so that others can reproduce as well as compare new methods against these supposedly good baselines.

  73. (1) I wish our paper had been reviewed by someone not so biased against C4.5 (the algorithm we chose to compare ours with).

    (2) I do not think the paper was carefully read by two out of the three reviewers.

    (3) Our answers to their comments were ignored. In particular, one of the reviewers seemed to have no idea what he was talking about in his review.

    (4) Before submitting to ICML I had a good look at past ICML proceedings. Honestly, compared to several papers from past editions, ours could comfortably be placed among those accepted.

    (5) We were invited to submit our paper to a Workshop. We did and it was accepted. However, as the Workshop will only have electronically available proceedings, we will not be able to get any financial support from Brazilian Funding Agencies and, for this reason, we gave up submitting the final version and, as a consequence, will not be attending the event.

  74. Given the mathematical content in many of the papers, it is impossible to review 7 papers adequately in that short a time. I would favor papers with theorem proofs being assigned, with more time for review, to people who can actually verify the work. Oftentimes, the math is not commented on because the reviewers don't take/have the time to verify it. Those papers are getting a free pass.

    Allowing the authors to upload a new version of their paper is a good idea; however, more time should be allocated to read the new paper versions.

    The review process has been getting more and more comprehensive over the years, which is a good thing, but it is quite time consuming for the reviewers. In fact, the review process is now as comprehensive, if not more comprehensive, than for journals. Since other fields do not recognize conference publications and often have page limits for their journal publications, could we simply change ICML's publishing format to be a journal instead of proceedings? The journal would simply have one issue per year, and the authors would have the added benefit of presenting their work at ICML.

  76. I strongly support author feedback and think it is a valuable resource for an area chair.

    I would have liked to write more positive things about it as an author as well, but there was absolutely no reaction to it, so I have to assume they just did not read it. This is not a rant because my paper was rejected; it's just how it turned out this year. I've had positive experiences in the past as well (partly also with rejected papers).

    What I would like: authors could rate reviewers (a simple 1-5, possibly according to various categories (length of review, etc.)); this feedback would be collected over the years and used for selecting and filtering area chairs and reviewers.

  77. It is not normal that certain reviewers agree to review papers that are not in their field.

    Some reviewers ask that certain American papers, and probably their own papers, be cited, while other non-American papers are completely ignored. You see a lot of ICML papers that take up earlier work published in Europe.

    How is the program committee constituted? A renewal and a call for members should be offered each year.

    Thanks

  78. I really really like that every paper has a 20 minute talk, and I'm quite disappointed it's not the case this year...

  79. The paper matching system seems to work well.

  80. This was my first time submitting to ICML. I felt that 2 out of 3 reviews were really brief (in the community I come from, short negative reviews without constructive feedback are considered an insult; only positive reviews are permitted to be short, and this rule is diligently enforced). The reviews lacked justification for what they said. It is important that reviewers take up the task of carefully justifying their thoughts, especially if they are negative, and of providing constructive feedback. Given all the work authors put in, few-line reviews give an outsider the impression that the entire community is flaky and that people do not take being on the TPC seriously.

  81. We were very happy that our paper was accepted, but had a very poor sense of what the reviewers discussed after receiving the revisions. The reviews listed on CMT at the end were unchanged from before, but the meta-reviews hinted that the reviewers were 'very pleased' with the revisions. Furthermore, it wasn't clear if the accept/reject decisions were the reviewers' initial responses or their responses after revisions. In general, more transparency would be helpful for understanding what was or wasn't well received.

  82. Allowing full paper resubmission is OK. But reviewers shouldn't be expected to reread a substantially different paper. If the authors didn't have a good, clear submission together in time for the first deadline, that's their problem. Additional stages and complications to the review process rarely make a difference, so they aren't a good use of time.

    Having the papers released in two batches encouraged spending more quality time on each paper. That's a good idea.

  83. The question about the quality of the majority of reviews of our paper hides a fair bit of variance -- we had one highly-confident but wrong review, while the others were good. We were pleased to see that the ICML review process was robust to this problem.

  84. During the author response it would be nice to see the reviewers' scores for the paper. For my paper it seemed that the reviewers' comments (mostly positive) did not correlate well with the reviewers' scores (all weak rejects).

  85. It's unfortunate that the author response is not read by some reviewers, but their opinions are still used by the meta-reviewers.

  86. The biggest issue in reviewing is the number of papers per reviewer. Providing thorough reviews for a small number of papers is very doable in the time frame provided this year. I would define 'small' as 2-4 (roughly 1 per week in the review period). For larger numbers, a reviewer must decide to either: sacrifice time from other commitments, write (some) short reviews based on quick reads, or only review papers on topics they know extremely well (to minimize time on supplemental reading).

    How large a pool of reviewers is needed to reduce the reviewing load from 7 (or 8, in my case) down to 3?

  87. I like the idea of a discussion with a chance to make the reviews public. I feel it can improve the quality of reviews. I would recommend getting the permissions to make reviews public *before* the start of the review process.

  88. Two of the reviewers didn't understand my paper and so rejected it because they believed it had TECHNICAL ISSUES!!! I rewrote the whole theory section (3 pages) to help them understand it, and in the end they didn't read the re-submitted paper, and rejected the old one again! The most interesting part was that there was a question for them after the rebuttal phase -- 'I have read and considered the authors' response. (To be answered only after the response period.)' -- and BOTH OF THEM ANSWERED NO!

    1- I really prefer that when someone considers himself an EXPERT in machine learning, he at least has the ability to follow some linear algebra! ... apparently they couldn't ...

    2- I'm even OK if the reviewers do not read the resubmitted papers!!! BUT PLEASE remove this stupid question, 'I have read and considered the authors' response.'; is it there just to humiliate the authors?!? For me it was like 'I had rejected you, and I'm telling you that I didn't want to consider your answer, so I'm still rejecting you!'

  89. Allowing substantial changes and uploads of new versions of submissions moves this conference too much towards a journal model. The aim should be to decide whether the snapshot of the work taken at submission time is ready for prime time or not.

    As an area chair, I did not like the procedure for selecting reviewers, as for many of the papers I was responsible for, the most appropriate reviewers were at their quota. One other quibble I had was that I do not think the policy of allowing the main AC, sometimes with the help of the secondary, to make the accept decisions is a good idea. Some mechanism for having broader discussions about borderline papers amongst ACs would be good.

  90. If revised papers are going to be allowed during the response period, there need to be more precise and constraining instructions on what edits are allowed. Some authors took this as an opportunity to, essentially, completely rewrite their paper. It is unfair to expect reviewers to evaluate an entirely new manuscript on short notice, and unfair to authors if they don't know what changes they can expect to be evaluated.

    - Reviewers should remain (and should be required to remain) professional when writing reviews. There is no need to write that a paper is 'trivial' or 'boring' multiple times in a review.

    - ICML should be more open to accepting more mathematically-inclined papers. Reviewers should not be scared away/turned off by abstract mathematical formalism.

  92. The reviewers seemed mostly to like the paper: weak reject, weak accept and strong accept. In the rebuttal we provided clarifications for the few concerns raised by the reviews. The two meta-reviewers seemed not to like the paper, with only very brief remarks and no clear justification for rejection given the previous positive reviews.

    Given the positive initial reviews we got, we would have expected a more detailed account by the meta-reviewers raising major problems with the paper.

  93. Our reviews were of really poor quality, more seeking to trash the paper (which apparently has become a routine attitude for reviewers) than to come to an objective decision. The comments by one reviewer on the theory (stupidly just repeated by the AC) are completely wrong. Other complaints were also completely unreasonable. In fact, it is likely that they are by Tobias Scheffer, with whom we have communicated since: this has demonstrated that the comments are ridiculous. Terrible work!

  94. The most negative reviewer never bothered to read our response, which is insulting! The chairs should at least make sure that the reviewers press the 'I read' button.

    The feedback meta-review was confusing, since after the first round it said 'probable accept' and the final outcome was 'reject', where the only additional input was our response!

  95. It is normally a big PITA to review 8 papers for ICML, but this year all of the papers were extremely relevant, so I was happy to help with the reviewing process. In general, the system seemed to work quite well, and most people that I talked to seemed happy with it.

    However, I also had two very negative experiences with the ICML review process this year. One as an author, and one as a reviewer. I believe that both of these experiences would be alleviated if the area chair associated with papers was not anonymous, since there currently seems to be little accountability in the process.

    As an author: Our paper was rejected, in what I believe is the worst decision that I've seen as an author/reviewer (normally I agree with accept/reject decisions, and I at least understand why a particular decision was made, even if I may not agree with it). From our point of view, it seems that the paper was rejected for largely political reasons (perhaps it did not fall along the right party lines or we should not have revealed our identities by placing the paper on arXiv), and we were very unimpressed with how dismissive the area chair was of the work. I have seen this sort of thing at NIPS before, and occasionally at some other conferences, but was very disappointed to see it at ICML.

    As a reviewer: I understood/agreed with the decisions made on 7 of the 8 papers that I reviewed. For the remaining paper, I am very unimpressed with the decision made. In the initial submission, the authors claimed a well-known algorithm as a novel contribution and made a variety of blatantly and provably false claims. This was pointed out, and with the author response the authors submitted a significantly revised paper, completely changing the abstract and the claimed contributions of the paper. Since the new paper seemed to address the reviewer comments on the previous version (and since the reviewers did have time to re-evaluate the completely different paper during the short discussion period), the paper was accepted.

    To me, the discussion period either needs to be longer or there need to be limits placed on the extent of the changes that the authors are allowed to make in the resubmission. I really feel that the authors took advantage of the system and were able to publish a paper that, although promising, was not ready for publication and was not properly reviewed in the form submitted.

  96. My only disappointment is regarding the assignment of durations to the talks. From the reviews we had (three strong accepts), we were quite confident that the paper would have a 20 minute slot. However it ended up with a 5 minute one. If there were some additional hidden criteria, it would have been nice to know about them. Otherwise, from outside it seems like the 5 minute talks were given to the borderline papers.

  97. The organizers could give a prize to outstanding reviewers (as is done at CVPR) to encourage and reward good reviewing.

  98. The Program Chairs did a *fantastic* job this year.

  99. It might help to facilitate a discussion between reviewers early on.

  100. I feel that the review process is getting too selective, with reviewers expecting journal-quality papers in an 8-page conference format. It's simply not possible to have full derivations and a full set of experiments in so few pages. However, the ICML reviewer guidelines state that 'reviewers are responsible for verifying correctness'. If this is the case, then there should be some way to attach a full derivation to the paper that the reviewer can examine. From my understanding, the reviewer is not obligated to look at the supplementary material.