There are a great many ways in which that ideal can fail. I draw a great deal of schadenfreude from reading Retraction Watch, which is effectively a blog about cases where peer review failed in one of many ways, something was published, and mistakes or misdeeds were later found out. I, like most scientists, know a few people whose work may show up all over Retraction Watch some day.
Which brings me to the fact that I am currently figuring out how to respond to a review that has failed with regard to independence, expertise, detail, fact, specificity, and constructiveness. I would have suggested to the journal that this person could not be an independent reviewer, except that it never occurred to me that anyone would consider him knowledgeable about the topic of the paper. After we explained the long history of this interaction to the journal, we were assured that our resubmission would be sent to different reviewers. Even so, in resubmitting, I have to respond to all the reviewer's comments, even those that are wildly counterfactual, have nothing to do with the current manuscript, or are simply complaints that the reviewer's own work is not cited more extensively. And it has to be done politely and factually. So one must never include responses like these:
- This highlights a fundamental difference in approach to science. The Reviewer's comments, and publications, suggest that scientific papers should be fishing expeditions in which everything that can be gotten out of a data set is analyzed and whatever tests significant is published breathlessly. We started with one original, a priori question, gathered all of the available data to address it, and got a clear result, which we state concisely. While some authors would stretch the Results section out into numerous exploratory paragraphs, expounding upon questions tailored to fit the results of the numerous analyses, that would surely be a disservice to science.
- It is not clear what this means. Perhaps the reviewer did not find our Methods section. It is in there between the Introduction and the Results.
- It does not seem that the Reviewer has any idea what kind of data we are using, despite the six paragraphs on the topic.
- Furthermore, a reading of the manuscript would have revealed that no matrix models are employed. The Reviewer's comments would seem to have been hastily copied and pasted from a review of an unrelated paper.
- The Reviewer's publications are not relevant or useful here. Perhaps they were relevant to the paper for which most of this review was written?
- This is counterfactual and the Reviewer has excellent reason to know that.
- These quotes of the journal's rules are from an entirely different journal, one the Reviewer often reviews for.
- Not only can we find no mention of this statistical rule anywhere, we note that the Reviewer's own papers don't follow it. We asked an expert in these methods about this 'rule.' She called it "hilariously made up."