Position: Want Better ML Reviews? Stop Asking Nicely and Start Incentivizing with a Credit System
Abstract
With soaring submission counts, stricter reciprocal-review policies, the widespread adoption of platforms like OpenReview, and no offsetting pressure from publication fees, the machine learning (ML) community has one of the largest scholarly presences of any scientific field. And yet, almost everyone has unpleasant stories to share about their review experience. Worse, there is little public space to seriously discuss, let alone debate, what makes a review system effective or how it might be improved. In this position paper, we center our discussion on two core problems: How can we reasonably limit the number of submissions? And how can we incentivize good review practices while discouraging bad ones? We first assess the strengths and shortcomings of existing attempts to address these problems; specifically, we present four takes on popular conference mechanisms and propose two alternative designs for improvement. Our general position is that meaningful improvement in ML peer review will not come from polite best-practice suggestions tucked into Calls for Papers or Reviewer Guidelines; it requires enforceable yet fine-grained procedural safeguards paired with a currency-like credit system (what we call OpenReview Points). ML practitioners can “earn” such points by contributing good review practices, and “spend” them across one or more major conferences to redeem various “perks,” such as complimentary registration or the right to request additional review resources.
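The earn/spend mechanics of such a credit system can be sketched as a minimal cross-conference ledger. This is purely illustrative: the class name, point values, perk names, and prices below are hypothetical assumptions, not figures specified by the proposal.

```python
# Hypothetical sketch of an OpenReview Points ledger.
# All point values and perk prices are illustrative assumptions.

PERK_PRICES = {
    "complimentary_registration": 50,
    "extra_review_resources": 30,
}

class PointsLedger:
    """Tracks points a practitioner earns and spends across conferences."""

    def __init__(self):
        self.balance = 0
        self.history = []  # (conference, event, delta)

    def earn(self, conference, practice, points):
        """Credit points for a good review practice (e.g. a timely, thorough review)."""
        self.balance += points
        self.history.append((conference, practice, +points))

    def spend(self, conference, perk):
        """Redeem a perk if the balance covers its price; returns True on success."""
        price = PERK_PRICES[perk]
        if self.balance < price:
            return False
        self.balance -= price
        self.history.append((conference, perk, -price))
        return True

# Points earned at one venue are spendable at another.
ledger = PointsLedger()
ledger.earn("NeurIPS", "timely_review", 20)
ledger.earn("ICML", "emergency_review", 40)
ledger.spend("ICLR", "complimentary_registration")  # 60 >= 50, succeeds
print(ledger.balance)  # 10
```

The single shared balance is what makes the system "currency-like": credit is portable across conferences rather than tied to the venue where it was earned.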