Thursday, January 29, 2015

Insignificant figures

Before writing a paper I make lots of figures. Some will be improved upon and included in the paper. Others just help me understand what the data are telling me. Graphical data analysis.
Still others, like this one, end up being rather pointless exercises. This figure categorizes the barnacle larvae that died during our study, based on the stage during which they died and on whether they spent abnormally long in that stage and/or the previous stage before dying. The reason it is pointless, despite the nice colors and nested boxitude, is that all the same information can be displayed more compactly and clearly in a table. However, I now know how to use, and when not to use, the R package treemap.

Saturday, January 24, 2015

Choosing peer reviewers

Several years back, I was unbearably excited to be, for the first time, submitting my own manuscript to a real scientific journal. I'd spent months polishing it, and was totally confident it should be published. Working through the various steps of submission (input author names and contact info, input title, input abstract, keywords, etc.), I was surprised to be asked to input names and contact info for three recommended reviewers. Defendants in court don't get to recommend specific peers to serve on their juries; how could science be served by asking me to recommend peers to evaluate my work? Confused, I emailed one of my advisers, who happens to be an outspoken crank when it comes to all of officialdom, and came away with the impression that this was not to be taken too seriously. I hastily plugged my keywords into Google and chose three prominent names I had never encountered before, who went on to tell the journal my paper was technically sound but not of the greatest interest. I was extremely lucky in that the editors of the journal found it very interesting, and published it anyway. Having failed to learn my lesson, I saw a few papers I submitted since then go to unfavorable reviewers that I had recommended.

Now, journal editors can decide to ignore these recommendations, and those invited to review can say no, but in a large portion of cases (I have no data, this is simply a strong impression), some or all of those recommended end up writing reviews. The reason journals ask for such recommendations is to help the journal editors quickly find people who are highly qualified to review. I already know who is very knowledgeable about my specific topic, while the journal editors may not. Many journals have editors who are not paid anything for their service, and all have editors whose time is limited. Asking for and using these recommendations saves time, and probably helps avoid unqualified reviewers.

Doing so, however, has a pretty clear corrupting influence. Those who are good at this game pay great attention to whom they recommend, not only carefully considering the knowledge, viewpoints, and interests of those they will recommend, but crafting the paper to raise as few objections as possible from these individuals. Papers that defy much of what is well established in one field are regularly published based on the recommendations of reviewers whose knowledge comes from another branch, and this fact is not ignored in making these recommendations. If the paper is very likely to be sent to a particular reviewer, that person's terminology and definitions will be used and his or her papers referenced. At scientific meetings, people will say, "I like this idea, list me as a reviewer." At its worst, peer review is reduced to a popularity contest, with well-established authors having their work (not only journal articles, but also grant applications) evaluated mostly by their friends and allies.

Mixing moral distaste, political naivety, and hurry, I have generally spent no more than a few minutes on the question of who my recommended reviewers will be. The latest iteration of this came with a paper on which two of my undergraduate students are lead authors. We submitted it to one journal with a hastily thrown together list of recommended reviewers: experts on the organism we were working on, but not necessarily interested in our particular topic. They trashed it solely on the basis that no one cared. We submitted the same manuscript, with very few edits, to another journal, listing reviewers with some knowledge of the organism but a strong interest in questions related to our own. We got back three extremely positive reviews, praising our highly original and relevant work, recommending several very minor changes, and urging the editors to publish this paper. Yesterday morning we submitted these revisions, and yesterday evening the journal officially accepted our paper.

So I've not only learned my lesson, but decided to heed it. Whom I ask to review a paper is (depressingly) almost as important as the quality of the work. I see truly terrible papers come out in excellent journals, presumably approved by carefully chosen reviewers, and some very good papers get rejected by less selective journals, in part because of poorly considered recommendations. From now on, I will put my qualms aside and think carefully, early in the process, about whom I will recommend as reviewers. After all, everybody else is doing it.

Friday, January 23, 2015

Very pleasing

There is something peculiarly satisfying about publishing an experiment which has, for its central instrumentation, a small magnet suspended by a human hair.

Wednesday, January 21, 2015

Good practice

Always order paper reprints for your undergraduate coauthors to give to their parents.