I think I’ve mentioned before that I do a large amount of manuscript reviewing. Most of it is for a particular journal in my field, and almost all of these reviewing opportunities come from one Editor. While some may view reviewing as a burden, I do not, and the job is made more pleasant because this particular Editor tries to send me manuscripts she is confident I would read anyway. She judges this very well, and I rather enjoy getting an early look at work that interests me. I also enjoy the opportunity to help authors improve their manuscripts (many of which are excellent to begin with).
As a reviewer, you eventually get to see what your fellow reviewers have to say about a manuscript (generally right after you submit your own review). I usually take some time to look these over, and as a general rule they are very similar to the ones I submit. On occasion, however, they are quite different. Sometimes another reviewer points out something I overlooked, or has a problem with a manuscript that likely stems from a difference in our specializations. All too often, though, the other review is completely worthless. What do I mean by this?
There are occasions when a review that essentially says, in kinder terms, that a paper is pure crap is warranted. Sometimes you get a paper written so poorly that you have no idea what is going on. There is really no way to suggest improvements to such a paper other than telling the authors they need to work on the quality of the writing. I have also received manuscripts that completely lack controls (in which case you can recommend rejection and state the controls that need to be done) or manuscripts that are based on a false premise (in which case you can simply explain where the authors went wrong). Sometimes a stock critique (the paper lacks focus, there is no hypothesis) is warranted, but in that case the reviewer must supply evidence to support the claim. If the stock critique is warranted, it should not be hard to show the Editor that it is true.
The instances above do not describe two recent reviews submitted for papers I thought were quality manuscripts. While I am not going to quote the actual reviews, each boiled down to a few sentences laden with the usual stock critiques: the paper lacks focus, there is no clear hypothesis, the images are of low quality, and the findings are not clinically relevant. Not a word of support for these statements (nor even a word to indicate that the reviewer had actually read the manuscript), just pure regurgitation of keywords intended to push the Editor toward rejection. Except that is never what happens.
I’ve been through this enough times with this particular Editor at this particular journal to know what happens next: a new reviewer is invited to look at the manuscript, and the time to decision is delayed. This creates extra work for another reviewer, more work for the Editor, and a delay for the authors of the manuscript.
So what is the lesson here? Do a good job when you are invited to review. Always give evidence for your claims, and try to improve the quality of the papers you review. Sure, you’ll never get credit for making a manuscript better, but there is some satisfaction in seeing a polished final product that reflects suggestions you made to the authors. Finally, remember that as a reviewer you are not the gatekeeper for publication; that is the responsibility of the Editors. Your job is to advise the Editor and the authors, and to help everyone add quality studies to the archive that is the published scientific literature.