International Conference on Machine Learning June 26–July 1, 2012 — Edinburgh, Scotland


ICML 2012 Survey Results

General comments

Conditioned on the respondent being an author

  1. Very strongly - ONLY COLT or UAI should be co-located with ICML. The focus area of ICML (COLT/UAI) is different from other applied conferences, and I feel co-locating it with any other conference would have more adverse effects than good. If applications need to be blended into ICML, I would suggest having focused workshops at ICML rather than co-locating with an applied conference.

  2. Even though my paper was rejected, I feel that the reviewers raised valid points which I tried to address in my rebuttal and revised version. But I do think it is hard to expect reviewers to re-review the revised version and make decisions in such a short time (post rebuttal phase).

  3. I was disappointed in the reviews. I think many reviews are done by very junior people and the reviewing quality is very variable. I state this both as someone who submitted a paper and as someone who reviewed for the conference. I suggest not allowing graduate students to review in place of their faculty member, and trying to have more senior people review.

  4. I do not understand why the meta-reviewer made the final decision on his own instead of taking the other reviewers' comments into consideration. A paper that received three weak-accept opinions was killed by his rudeness. That is quite unusual compared to other prestigious conferences such as SIGKDD, ICDM, and AAAI, where papers are accepted even with two negative reviews if there is a strong positive one. I think the quality of ICML'12 is not very good this year in terms of how the reviewing process was organized. It gives people a bad impression, especially since the organizers give rejected papers no chance at rebuttal once the meta-reviewer has made his decision. That is not fair at all.

  5. I think ICML should have a Doctoral Consortium with some specific objectives. Some of them could be: 1) a 5-10 minute presentation (accompanied by a poster in another session) by each candidate describing their work at a high level and its implications. This would serve three purposes: a) help both candidates and potential employers in their job search (academia or industry); b) the community would get to know the best of this work in a short time; c) candidates could communicate and network with each other (exchanging valuable information).

    This is not something new -- theory conferences like ITCS (http://research.microsoft.com/en-us/um/newengland/events/itcs2012/program.htm) have it under the following name: Graduating Bits: Finishing Ph.D.'s and Postdoc Short Presentations.

    We could also consider recognizing the Best Ph.D dissertation every year. This is also not new -- conferences like KDD regularly do this.

  6. ICML only accepts a small fraction of papers - it is NOT true that only a small fraction of the work going on in the field is worthy of presentation to the community. We are hurting our field by having only a small fraction of the work presented. We are also hurting our field by forcing reviewers to review things not in their area, and thus providing (often harsh and unjustified) reviews about things they do not know much about. This limits the accepted papers mainly to those for which the set of reviewers is very clear, and for which there is a precedent for how to review them. Papers that are really novel or that define a new area are almost always rejected. I can give many examples of this just from my own experience, where a paper was either nominated for or given an award, or accepted to a major journal in the field, and then rejected from a machine learning conference with poor-quality reviews. Unless we figure out a way to accept more papers, ask reviewers not to pass judgement on papers in areas where they have no experience or little expertise, and achieve a better assignment of reviewers to papers, we will continue to reject some of the most novel work going on in machine learning today.

  7. This was my first experience with ICML. I am disappointed because the paper I coauthored received two accept decisions from two reviewers and one weak reject from another reviewer. The reviewer who rejected our paper clearly stated that he is not an expert in the problem addressed by our paper. In addition, he/she clearly mentioned that his/her decision (i.e., weak reject) was an 'educated guess'. The meta-review rejected the paper citing the main reasons mentioned by the non-expert reviewer. I believe the selection of the reviewers should be more careful. The fact that the reviewers are volunteering their time should not be a justification for the risk of rejecting even one paper that is worth accepting.

  8. Naturally, for a conference of this size, I have no expectation that reviewers will read my feedback. I was happy that the meta-reviewer read our feedback, and seemed to support our paper and agree with our arguments, but unfortunately they still had to reject it. Based on this experience, I'd be in favor of skipping the response to the reviewers and having a (much shorter) period where I could respond directly to the area chair. That way, they could have more autonomy to reverse the decisions of the reviewers given a strong rebuttal. Also, I wouldn't have to pander to less-than-stellar reviewers.

    I also didn't like the format of the response. 4000 characters meant that I had to waste my entire rebuttal explaining the paper to the reviewer who hated it so much that he didn't bother to read it, and I had no space left to respond to the legitimate questions of more favorable reviewers. Perhaps the character limit should be per-reviewer, rather than for the entire rebuttal?

  9. Comments on the review process:

    Though my paper was rejected, I find the reviewers reasonable. I am very impressed that ICML this year had the rebuttal process, which gave me confidence that some (2 out of 3) reviewers did bother reading my rebuttal.

    A reviewer asked me to compare with other (not very relevant) techniques, and given the one-week time frame, it was practically impossible to do so. So I wish future ICMLs would extend the rebuttal period.

    General comments for improvement:

    I have always admired the transparency of ICML, though I've never had a paper accepted. But here is a small suggestion, which I hope will be looked at. My point is that no matter how you try to improve the review process, there will always be good papers that are rejected and non-outstanding, even flawed or inferior, papers that are accepted. Here I provide an example of my colleague's comment on an accepted ICML paper this year:

    http://atpassos.posterous.com/icml-2012-reading-list

    Essentially, a paper titled `A Graphical Model Formulation of Collaborative Filtering Neighbourhood Methods with Fast Maximum Entropy Training' was accepted to ICML 2012. This paper lacks a survey of related methods in undirected graphical models, and it did not bother to evaluate on data sizes that are standard today. I quote my colleague's comment:

    'But I'm shocked that the authors didn't even search for existing papers in this area of undirected graphical models for collaborative filtering. At least keywords like 'Markov random fields', 'Boltzmann machines', 'Markov networks', 'Markov relational networks' should have been tried. But they didn't. And the reviewers did not catch that either.

    A terrible thing is the experiment itself: today you can run with 100M ratings easily on a standard laptop, and here they run with 1M! In 2007, the AusDM reviewers did complain about the data size when we tried the 100K set, and the NIPS reviewers didn't even care about the contributions.'

    Social networking technology is popular nowadays, and I strongly believe there should be a common place for people to point out related work for accepted papers and to make comments, so that other researchers, especially PhD students, can get a better perspective on an accepted paper despite the unavoidable limitations of the review process.

    Thanks again for conducting the survey.

  10. This conference was extremely well executed.

    I felt 'taken care of' and that there were safety nets for poor decisions at many levels.

    My only complaint is (and perhaps this is my mistake?) that the ability to upload revisions was not well publicized. So some people who coincidentally had more time that week were able to do so, while others were not; similarly, some papers had reviewers who read that material, some did not.

    In general, I think rebuttals are excellent (can clarify gross misunderstandings, which are inevitable due to time constraints), but that revisions are strange; in particular, it is vague how much they should matter. For one paper I reviewed, the revisions resulted in an almost entirely new paper. It was frustrating, demanded a lot of time, and it was unclear if it was fair. What was the point of the first round of revisions? Also, with enough effort and pandering to the reviewers, could the revision phase accidentally cause a bad paper to be accepted?

    Lastly, I think it is important that reviewers assign scores BEFORE seeing other reviews (this was discussed on hunch.net, or perhaps by jl in comments); I think it is important for the first round of reviews to be as close to independent as possible. In one review I took part in, a super-famous person showed up, said the paper was terrible, and it was simply impossible to have a discussion thereafter. The single person who had liked the paper beforehand (not me) completely wilted. But I felt they made valid points, and the paper was actually rather borderline.

  11. I tried my best with my submission, including a full paper, supplementary material (a demo of the software), and a revised paper during the author response, but it is so disappointing that all these efforts were of no use...

  12. My submission was in learning theory. The reviewers appeared not to be. The paper was accepted in the end, so perhaps it's all acceptable noise.

    With online proceedings, follow the lead of, e.g., COLT and allow arbitrary-length appendices. By all means impose a rigid 8-page limit on the main body of the paper.

  13. Everything is good except for the two-tiered talk system based on reviewer scores. Reviews rarely contain more than 1 bit (accept/reject) of useful information about the quality of a paper.

  14. My paper wasn't accepted, but I felt like the reviews were not as hostile as I have had at some other conferences. This may just be random chance, but I appreciate the comments that I received.

    There was certainly some misunderstanding about what our paper was trying to accomplish, but overall I agreed with the reviewers' criticisms.

  15. My paper was rejected because, according to the reviewers, the idea was new and interesting but was not conclusively proved (there was no time for computational experimentation). I think this is a poor rationale for a conference review process. This is the last time I submit to ICML, as I think the reviewer pool is all over the map.

  16. My response to 'The majority of the reviews of your paper were...' is biased. Of my two papers, one seemed to have mostly reasonable reviews.

    The other had two reviewers, one of whom did not understand the paper and one of whom appeared to have a vendetta against the research. The 'vendetta' reviewer gave a three-sentence review, defending a paper that we pointed out had significant errors, and did not address our feedback at all.

    It might be nice to implement something so that obviously sub-standard reviews--for example, extremely short or abrasive reviews--were disallowed, or noted by area chairs.

  17. One day or a half day should really be added.

  18. Although I was really excited about the author response and the option to enhance/edit the paper based on the reviewers' feedback, I was highly disappointed when I found out that the reviewers didn't read the updated version (in which all their comments/concerns were addressed), and didn't even bother to update their reviews to match the latest version of the paper. I ended up with reviews that raised comments and points already addressed in the latest version of the paper, and referred to discussions and typos already fixed. If I had known that the updated version would not be considered, I wouldn't have spent so much time running new experiments and adding whole sections to the paper to address the reviewers' comments and feedback.

  19. Basically, the reason I feel that the author responses were meaningless is that the final decision justification was based on reviewer comments about issues that had actually been fixed. In fact, the errors in the paper were found prior to receiving the reviewer responses, and new proofs had already been derived in anticipation of them. The decision rationale claimed that the paper had errors when in fact the revised version had none. If there is a response period, then at least the responses should be read. I figure that the committee performs a triage on the papers and views only some of the responses, but not all - basically those borderline papers that eke it into the top 242.

    If that's the case, then maybe responses should be denied to those that won't ever be looked at again. Why put the authors through the trouble? Our egos can handle an outright reject (it's part of being in academia)

  20. I think AAAI and IJCAI are already such long experiences, that it would be hard to colocate with them.

    Uploading revised papers puts too much load on the reviewers.

  21. I felt that the reviewers pretty much disregarded both of our author replies, even though there were factual errors (about the supposed similarity of another paper, which is actually a quite different approach and not directly comparable, as we explained in the reply; and about the 'questionableness' of the results, which seemed to result from a failure on the reviewer's part to understand the different expected behavior of training data and test data). It was very frustrating and basically seemed like a complete waste of time to even bother with the author response.

    In general, I think that ICML has devolved to primarily accepting a certain type of paper. Incremental improvements to classifiers, particularly very mathematical approaches like manifold and kernel methods, are good. Applications papers, papers that explore different aspects of machine learning (especially when no standard benchmarks exist), papers about machine learning in context (robotic learning, goal-directed learning, lifelong learning) are bad. The conference has become more and more 'NIPS-like' and shows little sign of being willing to change direction. I have pretty much given up attending the conference but thought I'd give it another try this year. I think it is really unfortunate that the ML community seems to have become so narrowly focused, and there doesn't seem to be any real leadership behind trying to broaden the set of 'acceptable' topics/problems/methods.

  22. Although they may not bring any excitement, works on less audacious tasks have a better chance. Why must we modify the old method? Why must we link with existing methods when introducing a new method to solve an open problem? When something can be clearly expressed by theory, why must we use experiments to demonstrate it? A reviewer can say he does not know the background, yet he can still say 'strong reject'. What 'good' logic!

  23. I'm looking forward to ICML and appreciate the opportunities it presents. I hope to have some very interesting conversations and collaborations.

    With regard to reviews: One of our reviewers seems not to have actually read the paper, as their comments were surface-level (from the opening sentences of the introduction and results) and tangential (including references to literature which did not relate to our paper). I'm not sure if this is because we selected topic areas poorly and we were assigned someone who didn't know where to start, or perhaps because that reviewer simply found our topic not worth their time. While I can't know that reviewer's state of mind, I believe that a different review would have been much more helpful to me as an author, and to our area chairs, who ultimately needed to decide whether or not to accept our paper. Fortunately for us, the paper was in fact accepted, but I also wonder if that review / some aggregation of reviews influenced our selection for a short talk, vs. a long-format talk.

  24. I would like to see all accepted papers as full presentations.

  25. There is considerable variance in the reviews that this form fails to capture. In the case of one of my papers, the reviewers did not understand the paper, while for some others they understood most of it. I don't blame the reviewers, but I tend to blame the area chairs for not pushing for better reviews and for making sure that the reviewers do understand the papers. However, this is not to say I was very disappointed with the reviews in general.

  26. Regarding the question - The majority of the reviews of your paper were:

    A more appropriate answer is that one of the reviewers seems to have had something else in mind while reading the problem statement. Hence, I feel that he did not understand the whole paper.

    But I am satisfied with the other two reviewers' comments.

  27. Reviewing at machine learning conferences has become pure nit-picking. Papers with big ideas are rejected while picture-perfect incremental papers (that get us nowhere as a field but are thorough) get accepted. Also, it's all about fads (nonparametric Bayes, deep belief nets, regret minimization, etc.). It's the emperor's new clothes and nobody will say or do anything until it's too late and another community takes over.

  28. The two-stage reviewing in the previous year (with two deadlines) worked very well, please bring it back.

    Also:

    Having full talks in the same timeslots as multiple short talks is a terrible idea. Please don't mix the two: it's the worst of both worlds.

  29. The secondary AC was not helpful. Reviewer assignment should be automated (not done by the AC).

  30. The reviews were technically correct, but we felt that they somehow missed the point of the paper. Also, we felt that there could have been some hindsight bias (our approach and results may seem straightforward, but only in hindsight). Our paper was also mostly theoretical, and we made that clear, but we were criticized for not including extensive experimental work.

  31. Rejecting papers with an average score of 3.0 (compare http://hunch.net/?p=2517), i.e. weak accept, is quite tough and, in my opinion, contradicts the idea of 'giving controversial papers a chance'. In general, my impression is that the meta-reviewers have too much impact when they are allowed to reject papers that expert reviewers rated 'strong accept' and 'weak accept'. As you can see, I am a bit frustrated; but otherwise the reviewing process was exemplary.

  32. The process for new versions of the paper uploaded during the rebuttal period should be clarified. The amount of time is often too brief for preparing a fully revised paper, and it is very hard to estimate how a partly revised paper (say, a paper where one critical aspect has been corrected but that does not directly respond to all review comments) will be taken by the reviewers.

    The instructions for the new uploads should be clarified for both the authors and the reviewers, so that everyone would interpret the new versions in the same light. Perhaps now some reviewers already expect the authors to prepare a full new version and consider the rebuttal invalid unless it is accompanied by one. At the same time, some reviewers might not even look at the new version.

    Some alternative practical suggestions: 1) Make the rebuttal time 1-2 weeks longer and make revised versions obligatory. 2) Allow reviewers or area chairs to request revised versions if they consider it necessary for borderline cases. Then the authors would know that it pays off to do the extra work despite the brief time. 3) Allow the authors to indicate what they intend to convey with the new version. Something like a choice between 'this is a fully revised version that would be the camera-ready' and 'this version clarifies one aspect of the paper as explained in the rebuttal, but would still be modified to take the other comments into account'. The authors can already do so by writing the same information in the rebuttal, but the instructions could explicitly mention this.

    Some statistics on how many authors submitted revised versions (and whether this correlates somehow with acceptance) would also be nice. As the system is new in the field, it would pay off to provide as much information as possible to all parties.

  33. I strongly dislike the idea of being able to upload a new version of the paper after reviews.

    As a reviewer, it massively increased my workload, as I suddenly had to re-read a number of papers in the very short time between rebuttals and decision (there was no way around doing so, as some authors used the rebuttal form to simply point to lines in the new version where they had supposedly addressed criticism). In the end, none of the uploaded new material changed my mind about a paper.

    As an author, I felt pressured to do like everyone else and upload a new version, even though the reviewers had few concrete comments other than typos. I didn't do so in the end, because I was attending another conference during the rebuttal phase and did not have enough time. My paper got accepted anyway, but I was worried for the entire decision phase that the reviewers would hold this decision against me (as somehow showing a lack of respect for the review process). Even if I had been in the lab during this period, the short time frame would only have sufficed for quick and dirty corrections.

    I much prefer the old deal: Everyone has the same pre-agreed deadline, everyone gets their fair shot at acceptance, and rejected papers then have a real chance of improvement before the next deadline.

  34. The author response period should be at least one week.

  35. The area chair and meta-reviewer reversed the reviewers' decisions. He rejected a paper that was recommended for acceptance by two reviewers - one strong accept and one weak accept - with a weak reject from the third. To support his decision, the area chair obtained two very weak and disappointing meta-reviews that repeated a few of the lame claims of the weak-reject reviewer and clearly had not read the paper.

    Having served as an area chair for NIPS, CVPR, and ECCV, I know those conferences discourage such single-handed decisions based on the area chair's own opinion, made without reading the paper carefully.

  36. The lack of direct accountability of area chairs remains a problem. ACs can reject (or accept) a paper without any consequences. In some journals like JMLR, the associate editor assigned to a paper is known to the authors. Therefore, all decisions are taken much more carefully. This comment applies to all ML conferences that I know of.

    Of course, this might create pressure on junior faculty, but the number of biased decisions could be greatly reduced. Careers (in particular those of junior researchers) hinge on decisions which are opaque.

  37. The metareviews were factually incorrect - rather than being a summary, they explicitly added new conclusions, which were erroneous, and to which there is no mechanism for response. This seems arbitrary at best, and somewhat lazy at worst.

  38. The reviews in other areas of computer science (like theoretical CS) are much more thorough and correct.

  39. I think it would be good to require a new review if reviewers declare that they have considered the revised paper after the author response. In my case none of the reviews changed at all, but two of the three reviewers were marked as having considered the revision. This way, as an author, I get the feeling that the response process was not taken seriously, even though I put work into addressing the reviewers' remarks.

  40. I have this unsatisfactory feeling that for a lot of papers, the final decision turns out to be mostly based on chance (this feeling is shared with NIPS).

    From the reviewer point of view, there are so many submitted papers that I tended to assign a lot of weak rejects to help make a choice, since a lot of papers would have to be rejected in the end; only for papers I really thought had to be accepted did I assign an accept (either weak or not). In retrospect, I wonder whether this was a good choice, and I think that if this policy was not shared by most reviewers, it is not a good one.

    From the author point of view, it is obviously always painful to have one's papers rejected. I submitted 3 papers and had 0 accepted. 2 were heavy in maths, introducing novel ideas, with some experiments; 1 introduced a novel idea with quite a lot of experiments to assess it. A large majority of the reviews were unsatisfactory, either completely missing the point (probably it was stated too unclearly) or showing a lack of background on the reviewer's part to be able to raise the really good questions. In the end, I find it very difficult to tune a paper for a submission to ICML: most of the time, it is found either too theoretical or too practical. At the same time, we have the feeling that some (a lot of?) accepted papers could have been criticized just as ours were...

    ICML and NIPS are victims of their success. I have the feeling that something around 40% (or more?) of submitted papers are worth being accepted. It is really a waste of time and effort for the whole community to reject so many good papers. The presentation of new results is delayed; this is not good for research, and not good for authors. I think ICML should grow to a larger number of parallel sessions; I totally dislike the idea of having 20+ sessions in parallel as at certain conferences, but at ICML, there are half-days during which no session has any appeal to me. Having one or two extra sessions in parallel, and thus accepting more papers, would be interesting.

    Finally, I acknowledge the effort you have made this year to try new things to improve the selection process at ICML.

  41. I feel that having an author response is incredibly helpful, both as an author and as a reviewer. As an author, even though reviews haven't necessarily changed, I feel the discussion/paper decision seemed to have been influenced by the rebuttal. As a reviewer, the responses have helped clarify aspects of a paper and have helped me feel more/less confident about my review. ECML/PKDD did not have rebuttals this year, and as a reviewer I really didn't like it.

    I really liked being able to see the final score the authors gave to the paper. That should be done every year.

    In terms of the review process, I think reviewers should give more credit to looking at new problems/applications in the work. I had one paper that I felt was partially penalized by a reviewer because we only looked at a suite of problems in a new and important application domain but did not run on some standard benchmarks. I think credit should be given for looking at a new application when there is a technical innovation required to analyze that domain.

    I also think that care should be taken to ensure that there is diversity in terms of the papers submitted and the papers accepted. This year, for the first time, I felt like AAAI had more ML papers submitted that were of interest to me than ICML did.

    I also think getting the camera-ready instructions and presentation/poster instructions out a bit earlier would be good.

    I also think it may be worthwhile to make the submission length 8 pages but give accepted papers a 9th page, because reviewers usually (correctly) ask for additional detail in the paper.

  42. As the authors of a rejected paper, anything we say may be interpreted the wrong way; but we did find it odd that mr1 indicated the reviewers had agreed to accept and then mr2 rejected, based largely on what seemed to us to be subjective opinion. Perhaps this should be better reconciled?

  43. I cannot tell whether the rebuttal or the revised paper submission was considered by the reviewers. You should ask the reviewers that.

  44. The reviews were frankly disgustingly ignorant. The anonymity offered by the process seems to bring out the worst in the egos of the reviewers with a combination of petty pointless nitpicking masking ignorance with a total lack of any kind of scholarly humility. I guess this rant can be construed as me being bitter, but this process is disillusioning and only seems to promote a dangerous kind of homophily in ML research.

    I strongly urge a move towards removing anonymity. The purported benefit of it, i.e., removing intimidation by big names, is an insidious concept to protect.

  45. Some reviewers didn't read the author response. If they don't intend to do so, please have them explicitly indicate that in the feedback.

  46. It would be nice if the reviewers could see the final decisions including meta-reviews!

  47. Incentives should be given to reviewers for reading authors' rebuttals and taking them into account. Incentives should be given to reviewers to promote diversity in the topics and methods of submitted papers.

  48. - Authors should also have the ability to judge the quality of the reviews. - Actions should be taken against reviewers who decide on acceptance or rejection mainly based on whether there is a proper comparison with their own work, even if that work is not relevant.

  49. Secondary meta-reviews added an unreasonable amount of workload, and the instructions regarding them apparently changed during the process.

  50. I like the opportunity to have a look at all the papers during the evening poster sessions.

  51. Of my three reviews, I would like to mention that I had one that was right to the point.

  52. I think it would be nice to leave more time for the authors' rebuttal phase. This year there were only 3 days, during which the authors needed to prepare the rebuttal(s) and update their paper(s); this can be hard to finish on time for people who co-author more than one submission.

  53. The 3 initial reviews were fair and I provided detailed answer to the questions the reviewers raised. However, a fourth reviewer was added after the rebuttal:

    This reviewer did not understand the paper at all and barely read it, unlike the other reviewers, and since this was after the rebuttal, I could not even provide feedback.

    I think it is really bad to add additional reviewers after the rebuttal without giving authors the possibility to respond.

    Also, since the rebuttal period is very short, I do not see the point in enabling authors to provide a revised version of the paper that takes into account the reviewers' feedback: either the change is marginal, and then there is no reason to provide one, or it needs more care, and then it would require more time (I think somewhere between 10 days and 2 weeks should be OK).

  54. I wish I'd written a better paper. Comments were somewhat helpful. I couldn't tell if there wasn't much appreciation for theoretical contributions or if the reviewers missed the relevance. Cheers.

  55. The tone and quality of the meta-reviews were particularly bad this year.

  56. The reviewers of my papers highlighted the drawbacks much more than the novel contributions -- in fact they tried to completely ignore the benefits, by either not mentioning them or by belittling the experimental results. I understand the Program Chairs can do little about reviewer selection, and some communities (e.g., the one pertaining to my submitted paper) are very protective about accepting new ideas; nonetheless, the following measures could be helpful:

    1. Instead of just checking a radio button that the reviewer read the rebuttal, it should be mandatory for them to summarize the rebuttal and the changes/improvements reported in a new version, if any.

    2. Reviewers MUST provide suggestions on how to improve (or how to present) the manuscript so that it gets accepted, and/or point to the appropriate venue.

    3. In the case of a resubmission from AISTATS or NIPS, the paper should get a new set of reviewers, so that they are responsible and will not judge it subjectively. Indeed, if a reviewer rejects a paper once, he/she is more likely to reject it again than others are.

  57. Outstanding transparency of the whole process this year. THANK YOU.

  58. Reviewers (including me) have a tendency to do their reviews at the last possible moment, and this results in cramming all the reviews together and lowering the quality. It would be good if reviewers could be forced to spread out the reviewing load over time. For instance, the two-stage reviewing that was tried in past years achieved this.

  59. More reviewers that understand the basics of Bayesian statistics!

  60. I dislike the trend of allowing authors to change the papers after review. If this is a good idea for getting quality papers, then computer science should move to journals, like every other field. But conferences are a different beast. Reviewing 8 papers and *then* having to look at revisions of them is too much for already taxed reviewers.

  61. 'The majority of the reviews of your paper were:'

    has a more complicated answer than the multiple choice above allows: one was low quality, supremely focused on citing some prior work that was somewhat off topic (I don't think he understood why it was off topic, and he didn't address my rebuttal explaining why); one was high quality, including helpful suggestions; and one didn't quite understand the paper.

    A couple reviewers criticized that the methods I tested for my new problem setting were fairly straightforward / incremental. The degree to which novelty is promoted over more straightforward/sensible methods may lead to the conference proceedings being full of complicated, novel methods---biased away from the sensible/simple methods that (a) need to be tested/compared for the sake of science, and (b) are often more robust and reproducible.

    Recall multiple studies that tried to reproduce a variety of published methods and found that they rarely performed as advertised on different datasets. I've experienced it myself with 6 'fast, high-quality' text clustering methods, as well as with some other methods. This is the elephant in the room.

    Reviewers should constantly push to get authors to release datasets and *code*, so that others can reproduce as well as compare new methods against these supposedly good baselines.

  62. (1) I wish our paper had been reviewed by someone not so biased against C4.5 (the algorithm we chose to compare ours with).

    (2) I do not think the paper was carefully read by two out of the three reviewers.

    (3) Our answers to their comments were ignored. In particular, one of the reviewers seemed to have no idea what he was talking about in his review.

    (4) Before submitting to ICML I had a good look at past ICML proceedings. Honestly, compared to several papers from past editions, ours could comfortably be placed among those accepted.

    (5) We were invited to submit our paper to a workshop. We did, and it was accepted. However, as the workshop will only have electronically available proceedings, we will not be able to get any financial support from Brazilian funding agencies and, for this reason, we gave up submitting the final version and, as a consequence, will not be attending the event.

  63. I strongly support author feedback and think it is a valuable resource for an area chair.

    I would have liked to write more positive things about it as an author as well, but there was absolutely no reaction to it, so I have to assume they just did not read it. This is not a rant because my paper was rejected, it's just how it turned out this year. I've had positive experiences in the past as well (partly also with rejected papers).

    What I would like: authors can rate reviewers (a simple 1-5, possibly according to various categories (length of review, etc.)); this feedback is collected over the years and used for selecting and filtering area chairs and reviewers.

  64. It is not normal that certain reviewers agree to review papers that are not in their field.

    Some reviewers ask that certain American papers, probably including their own, be cited, while other non-American papers are completely ignored. You see a lot of ICML papers that take up earlier work published in Europe.

    How is the program committee constituted? A renewal and an open call for membership should be offered each year.

    Thanks

  65. I really, really like it when every paper has a 20-minute talk, and I'm quite disappointed that it's not the case this year...

  66. The paper matching system seems to work well.

  67. This was my first time submitting to ICML. I felt that 2 out of 3 of the reviews were really brief (in the community I come from, short negative reviews without constructive feedback are considered an insult; only positive reviews are permitted to be short, and this rule is diligently enforced). The reviews lacked justification for what they said. It is important that reviewers take up the task of carefully justifying their thoughts, especially if they are negative, and provide constructive feedback. Given all the work authors put in, few-line reviews give an outsider the impression that the entire community is flaky and that people do not take being on the TPC seriously.

  68. We were very happy that our paper was accepted, but we had a very poor sense of what the reviewers discussed after receiving the revisions. The reviews listed on CMT at the end were unchanged from before, but the meta-reviews hinted that the reviewers were 'very pleased' with the revisions. Furthermore, it wasn't clear if the accept/reject decisions were the reviewers' initial responses or their responses after the revisions. In general, more transparency would be helpful for understanding what was or wasn't well received.

  69. The question about the quality of the majority of reviews of our paper hides a fair bit of variance -- we had one highly-confident but wrong review, while the others were good. We were pleased to see that the ICML review process was robust to this problem.

  70. During the author response it would be nice to see the reviewers' scores for the paper. For my paper it seemed that the reviewers' comments (mostly positive) did not correlate well with the reviewers' scores (all weak rejects).

  71. It's unfortunate that the author response is not read by some reviewers, yet their opinions are still used by the meta-reviewers.

  72. Two of the reviewers didn't understand my paper and so rejected it because they believed it had TECHNICAL ISSUES!!! I rewrote the whole theory section (3 pages) to help them understand it, and in the end they didn't read the re-submitted paper and rejected the old one again! The most interesting part was that there was a question for them after the rebuttal phase -- 'I have read and considered the authors' response. (To be answered only after the response period.)' -- and BOTH OF THEM ANSWERED NO!

    1. I really prefer that someone who considers himself an EXPERT in machine learning at least has the ability to follow some linear algebra! ... apparently they couldn't ...

    2. I'm even OK with reviewers not reading the resubmitted papers!!! BUT PLEASE remove this stupid question 'I have read and considered the authors' response.' Is it there just to humiliate the authors?!? For me it was like 'I had rejected you, and I'm telling you that I didn't want to consider your answer, so I'm still rejecting you!'

  73. Allowing substantial changes and uploads of new versions of submissions moves this conference too much towards a journal model. The aim should be to decide whether the snapshot of the work taken at submission time is ready for prime time or not.

    As an area chair, I did not like the procedure for selecting reviewers, as for many of the papers I was responsible for, the most appropriate reviewers were at their quota. One other quibble I had was that I do not think the policy of allowing the main AC, sometimes with the help of the secondary, to make the accept decisions is a good idea. Some mechanism for having broader discussions about borderline papers amongst ACs would be good.

  74. If revised papers are going to be allowed during the response period, there need to be more precise and constraining instructions on what edits are allowed. Some authors took this as an opportunity to, essentially, completely rewrite their paper. It is unfair to expect reviewers to evaluate an entirely new manuscript on short notice, and unfair to authors if they don't know what changes they can expect to be evaluated.

  75. - Reviewers should remain (and should be enforced to remain) professional when writing reviews. There is no need to write that a paper is 'trivial' or 'boring' multiple times in a review.

    - ICML should be more open to accepting more mathematically-inclined papers. Reviewers should not be scared away/turned off by abstract mathematical formalism.

  76. The reviewers seemed mostly to like the paper: weak reject, weak accept, and strong accept. In the rebuttal we provided clarifications to the few concerns raised by the reviews. The two meta-reviewers seemed not to like the paper, with only very brief remarks and no clear justification for rejection given the previous positive reviews.

    Given the positive initial reviews we got we would have expected a more detailed account by the meta-reviewers raising major problems with the paper.

  77. Our reviews were of really poor quality, seeking more to trash the paper (which apparently has become a routine attitude for reviewers) than to come to an objective decision. The comments by one reviewer on the theory (stupidly just repeated by the AC) are completely wrong. Other complaints were also completely unreasonable. In fact, it is likely that they are by Tobias Scheffer, with whom we have communicated since; this has demonstrated that the comments are ridiculous. Terrible work!

  78. The most negative reviewer never bothered to read our response, which is insulting! The chairs should make sure that at least the reviewers press the 'I read' button.

    The feedback meta-review was confusing, since after the first round it said 'probable accept' and the final outcome was 'reject', where the only additional input was our response!

  79. It is normally a big PITA to review 8 papers for ICML, but this year all of the papers were extremely relevant, so I was happy to help with the reviewing process. In general, the system seemed to work quite well, and most people that I talked to seemed happy with it.

    However, I also had two very negative experiences with the ICML review process this year. One as an author, and one as a reviewer. I believe that both of these experiences would be alleviated if the area chair associated with papers was not anonymous, since there currently seems to be little accountability in the process.

    As an author: Our paper was rejected, in what I believe is the worst decision that I've seen as an author/reviewer (normally I agree with accept/reject decisions, and I at least understand why a particular decision was made, even if I may not agree with it). From our point of view, it seems that the paper was rejected for largely political reasons (perhaps it did not fall along the right party lines or we should not have revealed our identities by placing the paper on arXiv), and we were very unimpressed with how dismissive the area chair was of the work. I have seen this sort of thing at NIPS before, and occasionally at some other conferences, but was very disappointed to see it at ICML.

    As a reviewer: I understood/agreed with the decision made on 7/8 papers that I reviewed. For the remaining paper, I am very unimpressed with the decision made. In the initial submission, the authors claimed a well-known algorithm as a novel contribution and made a variety of blatantly- and provably-false claims. This was pointed out, and with the author response the authors submitted a significantly revised paper, completely changing the abstract and what the claimed contributions of the paper were. Since the new paper seemed to address the reviewer comments on the previous version (and since the reviewers did have time to re-evaluate the completely different paper during the short discussion period), the paper was accepted.

    To me, the discussion period either needs to be longer or there need to be limits placed on the extent of the changes that the authors are allowed to make in the resubmission. I really feel that the authors took advantage of the system and were able to publish a paper that, although promising, was not ready for publication and was not properly reviewed in the form submitted.

  80. My only disappointment is regarding the assignment of durations to the talks. From the reviews we had (three strong accepts), we were quite confident that the paper would have a 20 minute slot. However it ended up with a 5 minute one. If there were some additional hidden criteria, it would have been nice to know about them. Otherwise, from outside it seems like the 5 minute talks were given to the borderline papers.

  81. Organizers could give a prize to outstanding reviewers (as in CVPR) to encourage and reward good reviewing.

  82. The Program Chairs did a *fantastic* job this year.

  83. I feel that the review process is getting too selective, with reviewers expecting journal-quality papers in an 8-page conference format. It's simply not possible to have full derivations and a full set of experiments in so few pages. However, the ICML reviewer guidelines state that 'reviewers are responsible for verifying correctness'. If this is the case, then there should be some way to attach a full derivation to the paper that the reviewer can examine. From my understanding, the reviewer is not obligated to look at the supplementary material.