During ICML reviews I noticed that my personal take on reviewing is becoming increasingly distinct from that of my peers. Personally, I want to go to a conference and come away with renewed creativity and productivity. Thus, I like works that are thought-provoking, groundbreaking, or particularly innovative, even if the execution is a bit off. However, I suspect most reviewers feel that accepting a paper is a validation of the quality and potential impact of the work. There's no right answer here, as far as I can tell. Certainly great work should be accepted and presented, but the problem is, there really isn't that much of it per unit time. Therefore, like a producer on a Britney Spears album, we are faced with the problem of filling in the rest of the material. The validation mindset leads to the bulk of accepted papers being extremely well-executed marginal improvements. It would be nice if the mix were tilted more towards the riskier novel papers.
The validation mindset leads to reviews that are reminiscent of food critic reviews. That might sound objectionable, given that food quality is subjective and science is about objective truth, but the NIPS review experiment suggests that the ability of reviewers to objectively recognize the greatness of a paper is subjectively overrated. Psychologists attempting to "measure" mental phenomena have struggled formally with the question of what constitutes a measurement, and a lack of inter-rater reliability is a bad sign. (Test-retest reliability is also important, but it is unclear how to assess it here, as the reviewers will remember a paper.) So I wonder: how variable are the reviews among food critics for a good restaurant, relative to reviews of papers submitted to a conference? I honestly don't know the answer.
What I do know is that, while I want to be informed, I also want to be inspired. That's why I go to conferences. I hope reviewers will keep this in mind when they read papers.