Showing posts with label publishing. Show all posts

Friday, August 04, 2017

Meiosis kills! (Now in print, and video)


Scientists are rarely dispassionate about their research. Why spend years trying to figure out the fine details of something you have no interest in? Before my wife and I lost two pregnancies, I had thought abstractly about the question of why developmental failure is so common across plants and animals, but it wasn't personal. I was interested in the fact that dying before reproductive age means an individual does not get to pass on whatever traits caused it to die. In other words, natural selection should quickly remove any heritable trait that commonly causes developmental failure. At the same time, pretty much any organism loses some of its offspring, implying that some broad-based mechanism was indeed commonly causing developmental failure. I even went so far as to publish a review article focusing on what this mechanism might be.

But once developmental failure became personal, I wasn't just interested; I needed to know.

The result of that impulse was just published by Proceedings B, one of my favorite scientific journals. 



In addition, I worked with Sarah Friedrich, the extremely talented Graphics Specialist in my department, to make this video explaining the science in public-friendly ways:

Thursday, November 17, 2016

How not to respond to unhelpful peer reviewers

For as long as I've been a scientist, and longer, there has been extensive discussion of the many ways that peer review is broken. Peer review is how, in theory, science gets evaluated and hopefully improved before publication, and it is therefore hard to dispense with, despite being widely seen as inefficient, biased, and corrupt. It goes like this: the author submits a manuscript to a journal; the journal sends it out to independent experts for feedback; these experts (the scientific peers of the author) decide whether they are expert and independent enough to give appropriate feedback, carefully read the manuscript, think about it, identify its flaws, make constructive, detailed suggestions, and finally recommend to the journal whether it should be published as is, fixed and then reevaluated, or just rejected. That, at least ideally, is how it is supposed to work.

There are a great many ways in which that ideal can fail. I draw a great deal of schadenfreude from reading Retraction Watch, which is effectively a blog about cases where peer review failed in one of many ways, something was published, and mistakes or misdeeds were later found out. I, like most scientists, know a few people whose work may show up all over Retraction Watch some day.

Which brings me to the fact that I am currently figuring out how to respond to a review that has failed with regard to independence, expertise, detail, fact, specificity, and constructiveness. I would have suggested to the journal that this person could not be an independent reviewer, except that it never occurred to me that anyone would consider him to know anything about the topic of the paper. After we explained the long history of this interaction to the journal, we were assured that our resubmission would be sent out to different reviewers. Even so, in resubmitting, I have to respond to all the reviewer's comments, even those that are wildly counterfactual, have nothing to do with the current manuscript, or are just complaints about the reviewer's own work not being cited more extensively. And it has to be done politely and factually. So one must never include responses like these:
  • This highlights a fundamental difference in approach to science. The Reviewer's comments, and publications, suggest that scientific papers should be fishing expeditions in which everything that can be gotten out of a data set is analyzed and those results that test as significant are published breathlessly. We started with one original, a priori question, gathered all of the available data to address it, and got a clear result, which we state concisely. While some authors would stretch the results section out into numerous exploratory paragraphs, expounding upon questions tailored to fit the results of the numerous analyses, that would surely be a disservice to science.
  • It is not clear what this means. Perhaps the reviewer did not find our Methods section. It is in there between the Introduction and the Results.
  • It does not seem that the Reviewer has any idea what kind of data we are using, despite the six paragraphs on the topic.
  • Furthermore, a reading of the manuscript would have revealed that no matrix models are employed. The Reviewer's comments would seem to be hastily copied and pasted from a review of an unrelated paper.
  • The Reviewer's publications are not relevant or useful here. Perhaps they were relevant to the paper for which most of this review was written?
  • This is counterfactual and the Reviewer has excellent reason to know that.
  • These quotes of the journal's rules are from an entirely different journal that the Reviewer often reviews for.
  • Not only can we find no mention of this statistical rule anywhere, we note that Reviewer's own papers don't follow it. We asked an expert in these methods about this 'rule.' She called it, "hilariously made up." 
I need some empanadas.



Wednesday, November 16, 2016

Ghosts of papers that may some day be


The world is full of science that only half exists: experiments done but not written up, manuscripts waiting for revision, results too unimpressive to prioritize for publication. Whereas fetuses are gestated for months but born in hours, data sets often take longer to put out into the world than they took to create. Until it is published, academic research is only a nascent fluffy squishy wispy gelatinous downy larval effervescent ephemeral eufloccinaucinihilipilificatable translucent apparition, neither seen nor heard nor here nor there. Once published, research gains visibility, permanence, and perhaps even value.

While most scientists have things they would like to get around to publishing, I feel like I've accumulated a particularly long list of research projects I need to push out. This summer and fall I've actually had some time to dedicate to that. I've made a goodly dent, but the list is still long, and new tasks and projects emerge like mosquitoes from an abandoned hot tub.

I've published four good papers this year, another is ready to go as soon as my coauthor has time to look at it, and a sixth just needs a few final touches, and should be submitted in a week or two. Both of those 'full term' papers will, hopefully, come out next year. I think that's pretty good considering I spent most of the last year on intensive teaching, had a months-long battle with epidemic keratoconjunctivitis, have moved my family four times in the last year and a half, and have three children five and under. There are days I wonder why I am so tired, and then there are days I remember why I am so tired. And on those days, I don't feel the least bit bad about keeping all those manuscripts, and coauthors, waiting.

Friday, June 12, 2015

Back to posting: Seastar Video

It has been a long time. Here, to get things rolling again, is an awesome little video (with English subtitles) that SDU made about the discovery my students made (unexpectedly) and my friends and I helped them publish. The part at the end where the starfish squeezes out the tag through its skin in slow motion is pretty damn cool.

Olsen, T. B., Christensen, F. E. G., Lundgreen, K., Dunn, P. H., & Levitis, D. A. (2015). Coelomic Transport and Clearance of Durable Foreign Bodies by Starfish (Asterias rubens). The Biological Bulletin, 228(2), 156-162.

If you want to hear more about this process and how awesome my students are, see this post and this post and this post.  Oh, and especially this post here.

Monday, March 16, 2015

Progress


For years, I've been worrying about my chronic backlog of papers I should have written a long time ago and just haven't had time for. I'm very happy to report that my to-write list is getting a lot shorter. Three of the papers on that list will come out in the first half of this year. Number four is currently out for review, numbers 5 and 6 need to be revised and resubmitted, and a seventh is written and currently with colleagues awaiting their comments. The eighth has figures made and large chunks of text in their second or third drafts. If all goes well, all eight should be at least submitted by the end of the year, and I'm guessing that six will have come out. Of course there are several more that I need to get to, and new projects being planned, but it feels good to be clearing the backlog a bit. Especially nice is that after spending too long on methods papers, incidental discoveries and other tangents, the manuscripts I am working on now actually address the central points that motivated the research in the first place.

1. Levitis DA. (2015) Evolutionary Demography: a synthesis of two population sciences. In: International Encyclopedia of Social and Behavioral Sciences, 2nd Edition. ed.: J.D. Wright. (Coming out in May)
I am an evolutionary demographer, and while encyclopedia articles are not my bread and butter, this is very much on topic.

2. Olsen TB, Christensen FEG, Lundgreen K, Dunn PH, Levitis DA. (2015) Coelomic transport and clearance of foreign bodies by sea stars (Asterias rubens). Biological Bulletin. (Coming out in April)
This started as a student project to develop methods for studying the evolutionary demography of starfish, but when it became clear the animals wouldn't stay tagged, my students decided to investigate why. Their result was cool enough that we're publishing it.

3. Oravecz Z, Levitis DA, Faust K, Batchelder WH. (2015) Studying the existence and attributes of consensus on psychological concepts by a cognitive psychological model. American Journal of Psychology 128: 61-75.
My most cited paper (on the biological meaning of the word behavior) is one I started as a graduate student, even before it became clear I would be an evolutionary demographer. It got a nice write-up in the New York Times. Many of those citing it are in philosophy or psychology. A couple of years ago I was contacted by some psychologists who wanted to work with me to reanalyze those data. I never expected to publish in a psychology journal.

4. Zimmerman K, Levitis D, Addicott E, Pringle A. (2014) Maximizing the mean nearest neighbor distance of a trait to choose among potential crosses and design a fully crossed mating experiment.
This methods paper, currently out for review but with an earlier version already archived online and therefore available (journals are increasingly okay with this) grew out of a collaboration that is part of my ontogenescence project. In trying to answer my evolutionary question, my collaborator invented a new method for designing mating experiments, and we wrote it up. 

5. On raven populations in the Eastern US. One reviewer loved it just as it was, the other made numerous (and useful) comments on how to improve the analysis. Being worked on by my colleagues who are primarily responsible for the analysis.

6. Part of the same ontogenescence collaboration as #4, this was just rejected by a high impact journal on the basis that they rejected it (no reason or feedback given, as is common with such journals) and will be submitted to another in April.

7. Another ontogenescence paper, this time in a marine ecology context. Our plan is to submit in May. Between now and then the main order of business is to get feedback from colleagues and use it to improve the text.

8. Same project as #s 4 and 6.

9. On post-reproductive lifespan, building on the papers and methods that came out of my dissertation. We have cool results proving an interesting point, but it still needs a fair bit of work.

They probably won't be submitted in exactly this order, as a lot of it depends on factors beyond my control, but this is more or less the order I'm prioritizing them in. Beyond that it is hard to predict. Some older things I still do really need to write up, some fruitful student projects on post-reproductive lifespan that are looking good, some vague ideas. 

One thing I've decided is that, at least for the moment, no papers that are outside the main foci of my research program (evolution of pre-reproductive mortality and post-reproductive survival) are going to make the list. Numbers 1-3 & 5 above don't directly address either of these topics, and 4 is tangential. That is a bad habit, and one I'm going to break.

Saturday, January 24, 2015

Choosing peer reviewers

Several years back, I was unbearably excited to be, for the first time, submitting my own manuscript to a real scientific journal. I'd spent months polishing it, and was totally confident it should be published. Working through the various steps of submission (input author names and contact info, input title, input abstract, keywords, etc.), I was surprised to be asked to input names and contact info for three recommended reviewers. Defendants in court don't get to recommend specific peers to serve on their juries; how could science be served by asking me to recommend peers to evaluate my work? Confused, I emailed one of my advisers, who happens to be an outspoken crank when it comes to all of officialdom, and came away with the impression that this was not to be taken too seriously. I hastily plugged my keywords into Google and chose three prominent names I had never heard of before, who went on to tell the journal my paper was technically sound but not of the greatest interest. I was extremely lucky in that the editors of the journal found it very interesting, and published it anyway. Having failed to learn my lesson, I have since had a few papers sent to unfavorable reviewers whom I had recommended.

Now, journal editors can decide to ignore these recommendations, and those invited to review can say no, but in a large portion of cases (I have no data, this is simply a strong impression), some or all of those recommended end up writing reviews. The reason journals ask for such recommendations is to help the journal editors quickly find people who are highly qualified to review. I already know who is very knowledgeable about my specific topic, while the journal editors may not. Many journals have editors who are not paid anything for their service, and all have editors whose time is limited. Asking for and using these recommendations saves time, and probably helps avoid unqualified reviewers.

Doing so, however, has some pretty clear corrupting influence. Those who are good at this game pay great attention to whom they recommend, not only carefully considering the knowledge, viewpoints and interests of those they will recommend, but crafting the paper to raise as few objections as possible from these individuals. Papers that defy much of what is well established in one field are regularly published based on the recommendations of reviewers whose knowledge comes from another branch, and this fact is not ignored in making these recommendations. If the paper is very likely to be sent to a particular reviewer, that person's terminology and definitions will be used and his or her papers referenced. At scientific meetings, people will say, "I like this idea, list me as a reviewer." At its worst, peer review is reduced to a popularity contest, with well-established authors having their work (not only journal articles, but also grant applications) evaluated mostly by their friends and allies.

Mixing moral distaste, political naïveté, and hurry, I have generally spent no more than a few minutes on the question of who my recommended reviewers will be. The latest iteration of this came with a paper on which two of my undergraduate students are lead authors. We submitted it to one journal with a recommended list of reviewers that was hastily thrown together, experts on the organism we were working on but not necessarily interested in our particular topic. They trashed it solely on the basis that no one cared. We submitted the same manuscript, with very few edits, to another journal, listing reviewers with some knowledge of the organism, but a strong interest in questions related to our own. We got back three extremely positive reviews, praising our highly original and relevant work, recommending several very minor changes and urging the editors to publish this paper. Yesterday morning we submitted these revisions, and yesterday evening this journal officially accepted our paper.

So I've not only learned my lesson, but decided to heed it. Whom I ask to review a paper is (depressingly) almost as important as the quality of the work. I see truly terrible papers come out in excellent journals, presumably approved by carefully chosen reviewers, and some very good papers get rejected by less selective journals, in part because of poorly considered recommendations. From now on, I will put my qualms aside and think carefully, early in the process, about whom I will recommend as reviewers. After all, everybody else is doing it.

Friday, January 23, 2015

Very pleasing

There is something peculiarly satisfying about publishing an experiment which has, for its central instrumentation, a small magnet suspended by a human hair.

Wednesday, January 21, 2015

Good practice

Always order paper reprints for your undergraduate coauthors to give to their parents.

Wednesday, September 24, 2014

Rejection

Getting a paper rejection from a journal is always frustrating. So much time, effort and care goes into a paper that to have anonymous strangers say it's no good can't help but hurt. I've just had a paper rejected, and it still hurts even though they had nothing bad to say about the paper.

The starfish paper I wrote with my students documents something that hasn't been documented before, but is certainly not the world's most important paper. It doesn't fit neatly into any field or derive from the pressing questions in any literature. So we sent it to a journal that explicitly says they don't care if it is important, so long as it is original, technically sound research. The reviewers agree that it passes these hurdles, but question its importance to their field. On this basis alone, the academic editor rejected it. I can't say I'm entirely surprised, as PLOS ONE has become a relatively high-impact journal. That type of success naturally brings them to function like a more traditional print journal competing for the flashiest papers. I filled out their feedback form to suggest that they update their stated criteria for acceptance, but won't otherwise protest.

There are two upsides to all this. The reviewers found the paper convincing and novel, with no technical or language faults. So we just need to reformat it for another journal and submit it there. Perhaps more importantly, this gives my student co-authors a window into yet another aspect of the scientific process that most of their peers never see.



Monday, August 04, 2014

To-write-list


I keep a running list of papers to write on my whiteboard.

This list currently has 15 entries.

Two of these have been accepted for publication this year but not yet taken off my list because it is nice to see them there.
 
Another was submitted, rejected, and is awaiting rewriting.

Two have nearly complete manuscripts written and have been sent to colleagues for comments.

Five more are actively being worked on by me or co-authors and will hopefully be finished by the end of the year.

Four others are collaborations that are currently not moving. One of those is a project that died for specific reasons (like the data couldn't answer the question), and I just haven't entirely decided what to do about it. The others are things that neither I nor my collaborators are currently prioritizing, but that I hope we will get back to.

Finally, I have one project on my list that I want to write, but I don't even know exactly what data I need or the approach I want to take. I just know it is an important paper to write.

There are roughly a million other ideas I could put on my list, but I'm having a moratorium. I have decided I am not allowed to start on anything else until my list gets a lot shorter. As I have mentioned, I am applying for jobs, and I'd like to have most of this backlog cleared up before I go.

Thursday, July 17, 2014

Too concise?

I'm preparing a manuscript to submit to Nature. In addition to a low acceptance rate (8%), they have severe word limits. The 'main text' should be "about 1500 words" excluding the introductory paragraph, references, methods, figure legends, etc. It is the intro, summary of results, and discussion all in one. My main text, of which I now have a complete but not final draft, is 1,138 words. It was not hard to make it this short; this is just how long I wrote it. This of course makes me think that there are all sorts of vital things that I should have said but left out. I will think about what these might be after my wife and daughter are asleep.

Thursday, May 08, 2014

One way to avoid writing negative reviews is to decline to act as reviewer

"Please tell us why you have declined to review this article"

I am afraid I am unable to decipher the meaning of even the title of this paper, and the abstract implies a communicative style that could charitably be called idiomatic. I know all the words being used (and am familiar with the probable subject matter), but couldn't tell you what most of them mean in this context. Were I to review a paper written in this manner, I would spend an excessive amount of time on these linguistic issues, and might or might not ever get around to figuring out the embedded science.

Monday, February 11, 2013

Better measures of tiny bits of living jelly

My student and I had a paper accepted last week. I like this paper a lot. First some official details, then a short story about how the paper came to be.


Forthcoming in "Marine and Freshwater Research" 

The consistent, non-destructive measurement of small proteiform aquatic animals with application to the size and growth of hydra 

Daniel Levitis and Josephine Goldstein

Abstract: Hydra (Cnidaria), the basal metazoan most often studied in cellular, molecular and developmental biology, is difficult to measure because it is small, proteiform and aquatic. To facilitate broader organismal and ecological study of Hydra, we evaluate three methods whereby a polyp's body column can be measured by means of photomicroscopy. The volume, cylindrical surface area and surface area corrected for changes in body shape are all highly repeatable methods (r=0.97) when shape varies little. However, shape changes alter volume and cylindrical surface area. Repeated corrected surface area measures of the same individuals in markedly different positions yield standard deviations that are less than 5% of the mean measured area. This easy, non-lethal means of individual size measurement explicitly accounts for the flexible morphology of a polyp's hydrostatic skeleton. It therefore allows for the elucidation of how growth and size vary over time, age and food intake. We find that hydra change size dramatically day to day, and that while food level influences adult size, it has little effect on the early growth of recently detached buds. Finally, we discuss ecological and biological applications of this method. 

Part of why I like this paper so much is that my student did most of the hard parts, and the reviewers didn't ask for a lot of changes, and the editor (Dr. Russell Death) moved things along quickly and efficiently, so it is the first paper I've published that didn't feel like pulling my own teeth. It is a solid methods paper. It says, "here is a method for doing something that many scientists may want to do, that we didn't know a good way to do before."

In particular, it is a simple way to accurately and easily measure a tiny transparent aquatic animal with no hard parts or consistent shape without harming it.  Take a picture of it and with a bit of simple math calculate its surface area. Simple, elegant, inexpensive, non-invasive, biologically meaningful, everything I hoped it would be.

But the method presented in the paper is a substantially different, and better, method than the one we started out with. In fact, we were almost ready to submit the paper when we changed the method drastically.

I started out just wanting to measure a hydra polyp because it was necessary for another research project. Nobody had a method that worked halfway decently without killing the hydra. A hydra's body, although it shortens and lengthens, bends and twists, is almost always roughly a deformed cylinder. So I said, "Hey Josi" (that's my student), "why don't we develop a method to measure a hydra by photographing it and estimating its volume as though it were a cylinder?" She agreed, and off we went, taking photos of hydra during our whole research project, for almost two years. We finished gathering our data, and found that indeed we could measure hydra this way. It didn't work great, but it kinda worked. You could use it to tell the difference between a really huge hydra and one that was just kinda normal. Voilà, an unimpressive but probably publishable methods paper.
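The cylinder approximation we started with amounts to a few lines of geometry. Here is a minimal sketch of the idea; the function name, the specific measurements (body-column length and mean diameter taken from a calibrated photo), and the example numbers are my own illustration, not the code or data from the paper:

```python
import math

def cylinder_estimates(length_mm, mean_diameter_mm):
    """Approximate a hydra's body column as a cylinder.

    length_mm: body-column length measured from a calibrated photo
    mean_diameter_mm: average body-column width from the same photo
    Returns (volume in mm^3, lateral surface area in mm^2).
    """
    r = mean_diameter_mm / 2.0
    volume = math.pi * r ** 2 * length_mm               # V = pi * r^2 * h
    lateral_area = math.pi * mean_diameter_mm * length_mm  # A = pi * d * h
    return volume, lateral_area

# A hypothetical polyp photographed at 6 mm long and 0.5 mm wide:
v, a = cylinder_estimates(6.0, 0.5)
```

The weakness the paper describes follows directly from these formulas: both estimates depend on the momentary length and width, so a polyp that contracts or elongates between photos changes its apparent size even though no growth occurred, which is why the shape-corrected surface area measure was needed.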

We wrote it up and were pretty close to submitting when I decided to see what hydra-focused papers had come out recently, and found that someone else had just published almost the exact same method in a nice paper with an interesting biological point. Our formulation of that method didn't work any better than hers did, and we had no point beyond "here is how to measure." We couldn't publish the same method again, even if we came up with it independently. After some cursing and self-recriminations for being so slow, I decided to see if it was possible to salvage anything of the methods paper. Josi took another set of photos and I used them to repeatedly adjust my formula, with limited biological reasoning, until something worked better than the formula just published. In fact, one formula I came up with by eyeballing my graphs worked much better than the method just published. So much better that the calculated value hardly changed at all even when the shape of the hydra changed drastically.

Now you are probably saying to yourself that this is blatant cheating. Trying different formulas (perhaps 100 of them) until something gives you the result you want is a pretty sure way to get the result you want, if you are persistent enough. But two things combined to make this not just okay, but beautiful. First, the formula I stumbled upon made obvious biological sense, even before I knew it worked. This formula represents a hydra as a roughly cylindrical bag whose skin stretches as it elongates itself, or folds and ripples as it contracts. In other words, it describes a hydra accurately and reveals something I didn't previously know about the way a hydra moves. Secondly, and as importantly, when I applied the formula to the main data set, which I didn't use to develop the formula, it still gave a highly consistent measurement for any individual, even as the shape of the individual changed. The method in fact works for completely different populations of hydra. You could take two hydra that looked the same size under the microscope and conclusively decide that one was bigger than the other. You could tell how much a young hydra grew each day. You could really measure the buggers, eliminating most of the noise inherent to previous methods.

Still more lovely, the method Josi and I had just developed could make use of the photos and measurements we had already taken, so redoing all our calculations and figures required Josi to write only a few extra lines of code (Thank you Josi, thank you R). We did a little rewriting to compare our method favorably to the other method, attributed that other method to the person who had just published it, made our biological argument to explain why the method worked, and we had a drastically improved paper ready to submit. There is a lesson in this somewhere about the scientific method.

Wednesday, December 12, 2012

Unbe-fricken-lievable incom-shitforbrains-petence

So we just, finally, got the re-typeset copy of our previously mangled article back. The good news is that relatively few modifications were made this time. The bad news is that the production editor sent the wrong version of the paper for typesetting. This is the draft we sent them last summer, rather than the up-to-date version from this fall. Where the hell did they find this guy?

Monday, October 15, 2012

In which I use some strong words on the topic of a botched editing job

So this big long review article my collaborators and I sent to a very good scientific journal was sent for one last copy-edit before it was published. This is normal; most journals have a copy editor look things over right before typesetting. This particular paper had already been professionally copy edited, but hey, there is always room for improvement. So we get back the typeset version from the production editor (who works for the corporate publisher of the journal and is in charge of turning the accepted paper into a formatted, typeset publication), and an hour later, we get an email from the scientific editor (who is a scientist in charge of deciding what gets published and makes decisions about scientific content) that the copy editor (who works for the production editor and is only supposed to correct grammar, punctuation and such) is an "aggressive copy editor" and we should check the paper carefully. So we sit down to read through the paper and figure out what this means when HOLY SHIT! I notice that the very first sentence of our paper directly contradicts the rest of its content. And then YOU ARE FUCKING KIDDING ME?! I notice that my coauthor's name is spelled wrong. Then LET LOOSE THE DOGS OF WAR! I see that the very central sentence of the paper, the one that defines the really important concept that we are introducing and talking about, no longer means anything at all. And then, ARE YOU TRYING TO KILL ME?! I see that a paper written by a very senior colleague with whom I've only slightly corresponded is now attributed to me, as though I wrote it. To top it off ARE YOU SOME KIND OF IDIOT?! there are now all sorts of punctuation errors, formatting errors, random characters inserted in the middles of words. So I think maybe that's the worst of it, and then HOW HAVE YOU NOT BEEN FIRED YET?!
I figure out that the copy editor went through and sorta tried to rewrite the paper, adding a sentence here, taking out a clause there, HOW IS THIS POSSIBLE inserting parenthetical phrases with no closed parenthesis. (I DON'T UNDERSTAND! Oh, and MAKE IT STOP! the names of the demographic phenomena have been changed to something the copy editor thought sounded nicer.
So we figure maybe the copy editor was high on crack, and we write to the production editor, whose job it is to make sure the copy editing was done right, and we politely explain the problem, and ask that we would like to make sure that we don't miss any errors that may have been introduced, and so could we please have the list that he has of the changes that were made, and he SATAN! SATAN! writes back a one sentence email telling us we just need to follow the instructions he already sent. Considering I spent more than a year in total working on this paper, I think I am handling it rather well. DOOM! DOOM!

I have now written a somewhat less polite email to the production editor demanding that the pre-vandalized version be used, as we can't possibly find and mark all of the hundreds of places where the paper was damaged. My hope is to end up working with a different production editor, one who is not so BLATANTLY STUPID unconcerned about the quality of the product.

Tuesday, April 03, 2012

Accepted, but...

We have set ourselves a difficult task. Some months ago, my friends and I submitted a review article, clarifying some misconceptions that are common within the scientific literature, to a very good anthropology journal. Today we heard back from them. The editor explained that it took longer than usual because he had sent it out to "several" reviewers, and then "several more," and was waiting to get comments from all of them.

The good news is that all the reviewers seemed to like it, and the editor knows which issue of the journal he wants to put it in, which we take as the paper being accepted. The bad news (or at least time-consuming news) is that all of the several and several reviewers made long lists of things they would like to see changed, added, clarified or reorganized. I have not yet finished reading all these comments, but I don't see a lot of repetition, meaning that we have hundreds of distinct comments and criticisms to deal with. In the months we were waiting, we also showed the draft to a couple of colleagues, who gave us still different but also very useful comments. The good news is that the editor has given us permission to go well beyond the usual page limit for this journal in order to deal with all the reviewer comments. The bad news is that we now can't use space limitations as an excuse for not dealing with relevant points or citing relevant literature. So we have a great deal of rewriting to do.

Part of the problem with writing an article pointing out places where other people's thinking or language has been unclear is that one has to live up to very high standards for clarity in one's thinking and language. We've already extensively rewritten this paper a few times, and each time it has gotten clearer. Nevertheless, a large portion of the reviewers' comments are right on, and further clarification is needed. By the time this thing comes out it will either be brilliant or a total muddle. I'm not clear which.

Thursday, January 05, 2012

Authorship code

I'm writing a paper with two of my students. Well, I'm writing it with one of them and another one did a lot of work on the statistics. But today we had to straighten out the order in which the authors would be listed on the paper. This can be a contentious issue, and I know of cases in which papers did not get written at all because the authors couldn't agree on who got to be listed first. Some big multi-author papers simply list everyone in alphabetical order to avoid the fuss, but then Dr. Aardvark always gets to be first author. Some journals have a little section where each author's contribution is described, but these usually end up saying something uninformative and false like "all authors contributed equally."

I came up with most of the ideas in the current paper, put things together, decided who would do what, etc. My student did much of the lab work and is doing much of the actual writing. Given this, most biologists would propose what I did, and what my student objected to: She (as the person doing the writing) should come first, I (as the senior person on the paper) should come last, and everyone else (in this case meaning the statistics student) gets sandwiched in between. This was very counterintuitive for my student; she thought I was trying to minimize my own role by putting myself last. In fact, it is a step up for me to be writing papers in which I am in that last position. I remember this being counterintuitive for me the first time it was explained to me. I was a college student, and a boss said that he'd make me second author on a paper. I said something to the effect that I'd be glad to be even last author, which I thought was being humble, but he took it as me saying the paper had been my idea. We straightened out the miscommunication but I didn't end up being listed as author on the paper. That the last author spot (at least in biology) signifies the senior author is a code biologists internalize, and I had to think back a long way to figure out why my student objected to me being last author. I explained, she reluctantly believed, and now it is settled.

Sunday, December 18, 2011

The end is nigh!

When I start working on a project, it is always interesting and exciting, otherwise I wouldn't start working on it. Developing ideas is fun, narrowing down the question to just the core issue is challenging, building an apparatus or simulation or whatever feels productive and creative. But I also have to actually write a paper which frames the whole thing in a useful way, which means reviewing and summarizing the relevant literature, which is boring unless I actually have something novel to say about that literature beyond what my new experiment or analysis or whatever shows. And I have to make my paper, which is always interdisciplinary in some way, fit into a particular journal, most of which are tightly disciplinary. So I'm much better at starting papers than finishing them, and I have a tremendous backlog of papers to get out. It is therefore a great relief to have just sent out a paper that began (three years ago) as a response to another paper in which I saw some methodological shortcomings, turned into a broad comparative analysis of human and primate demography, and ended up as a review article submitted yesterday to an anthropology journal. One of my co-authors is a bona fide anthropologist, so I feel confident that we at least refer to most of the right papers. I'm also confident that our points are valid and important and the paper generally well written. But mostly I just feel happy that, if all goes well, I will soon no longer have to work on this paper.

Thursday, March 31, 2011

Skipping that hard middle part

The part of science I like best is the coming up with questions part. The part where I ask something someone else has never asked before (or at least I don't yet know that they did) and brainstorm together a plan about how the question could be answered. But my second favorite part is writing rough drafts. I like the rough drafts in particular because I can let the ideas flow, without getting hung up on making sure I have exactly the right reference or my font is just the one the journal prefers. In other words, it is writing without the impeding mechanical details.

I can do lab work, I can program a simulation, I can edit bibliography formats in citation management software, and all those other jobs that require extensive attention to details beyond the scientific concepts. But by preference I'm really a concept guy. If I had collaborators who wanted to do every part of the process between the planning and writing the rough draft, I'd be thrilled.

This is why I'm writing a Forum piece, to submit to a journal that responded positively to my pre-submission inquiry. (A positive response means they are willing to look at it, not that they promise to publish it.) Their Forum section is designed for short papers of about 1,000 words, in which the author makes a relatively simple point or poses a question without a lot of new data. I've finished a rough draft of 1,100 words in the last two days. Now comes the less fun part: editing it for clarity, making sure all the papers I cite actually say what I claim they say, getting feedback from colleagues, and editing it again. A more adulterated and repetitive form of creativity. Still, I think I would be inclined to write more papers in this format, as it makes a nice compact project.

Sunday, March 27, 2011

What to do?

I have an ethical situation here. I've received a fraudulent post-doctoral application. This applicant has, on his CV, listed quite a few publications in peer reviewed journals. Some of them are real. Others simply don't exist. There are no papers by this author in the listed journals. There are no publications with the listed titles in any journal. One of the imaginary papers is supposed to be in Nature, but the listed volume number doesn't even correspond to the listed year, and so on. I am very confident that the CV is fraudulent.

The question then is what, if anything, should I do about this? I don't really want to waste a lot of my time, but this is fairly serious misconduct. I could write to him and request copies of the papers, or an explanation of Figure 2 in the Nature paper. I could tell his boss on him, assuming he actually works where he says he does. I could simply reject the application. I'm not certain what the standard response to this situation is, if there is one. There are confidentiality rules that apply to job applications, even fraudulent ones, so public humiliation is out of the question.