exhibit a:

h/t retractionwatch blog and PhysioProffe.

context.

As we all know, much of the evaluation of scientists for various important career purposes involves the record of published work.

More is better.

We also know that, at any given point in time, one might have work that will eventually be published that is not, quiiiiiite, actually published. And one would like to gain credit for such work.

This is most important when you have relatively few papers of “X” quality and this next bit of work will satisfy the “X” demand.

This can mean first-author papers, papers from a given training stint (like a 3-5 yr postdoc) or the first paper(s) from a new Asst Professor’s lab. It may mean papers associated with a particular grant award or papers conducted in collaboration with a specific set of co-authors. It could mean the first paper(s) associated with a new research direction for the author.

Consequently, we wish to list items that are not-yet-papers in a way that implies they are inevitably going to be real papers. Published papers.

The problem is that of vaporware. Listing paper titles and authors with an indication that it is “in preparation” is the easiest thing in the world. I must have a half-dozen (10?) projects at various stages of completion that are in preparation for publication. Not all of these are going to be published papers and so it would be wrong for me to pretend that they were.

Hardliners, and the NIH biosketch rules, insist that published is published and all other manuscripts do not exist.

In this case, “published” generally means that you have received the decision letter from the journal Editor saying the paper is accepted for publication. At that point the manuscript may be listed as “in press”. Yes, this is a holdover term from the old days. Some people, and institutions requiring you to submit a CV, insist that this is the minimum threshold.

But there are other situations in which there are no rules and you can get away with whatever you like.

I’d suggest two rules of thumb. Try to follow the community standards for whatever purpose is at hand, and avoid looking like a big steaming hosepipe of vapor.

“In preparation” is the slipperiest of terms and is to be generally avoided. I’d say if you are anything beyond the very newest of authors with very few publications then skip this term as much as possible.

I’d suggest that “in submission” and “under review” are fine and it looks really good if that is backed up with the journal’s ID number that it assigned to your submission.

Obviously, I suggest this for manuscripts that actually have been submitted somewhere and/or are out for review.

It is a really bad idea to lie. It is a bad idea to list endless manuscripts in preparation unless you have a draft of each, with figures, that you can show on demand.

Where it gets tricky is what you do after a manuscript comes back from the journal with a decision.

What if it has been rejected? Then it is right back to the in preparation category, right? But on the other hand, whatever perception of it being a real manuscript is conferred by “in submission” is still true. A manuscript good enough that you would submit it for consideration. Right? So personally I wouldn’t get too fussed if it is still described as in submission, particularly if you know you are going to send it right back out essentially as-is. If it’s been hammered so hard in review that you need to do a lot more work, then perhaps you’d better stick it back in the in preparation stack.

What if it comes back from a journal with an invitation to revise and resubmit it? Well, I think it is totally kosher to describe it as under review, even if it is currently on your desk. This is part of the review process, right?

Next we come to a slightly less kosher thing which I see pretty frequently in the context of grant and fellowship review. Occasionally from postdoctoral applicants. It is when the manuscript is listed as “accepted, pending (minor) revision”.

Oh, I do not like this Sam I Am.

The paper is not accepted for publication until it is accepted. Period. I am not familiar with any journals that have “accepted pending revision” as a formal decision category, and even if such exist, that little word pending makes my eyebrow raise. I’d rather just see “Interim decision: minor revisions” but for some reason I never see this phrasing. Weird. It would be even better to just list it as under review.

A final note: the acceptability of listing less-than-published stuff on your CV or biosketch or Progress Report varies with your career stage, in my view. In a fellowship application where the poor postdoc has only one middle-author pub from grad school and the two first-author works are just being submitted…well, I have some sympathy. A senior type with several pages of PubMed results? Hmmmm, what are you trying to pull here? As I said above, maybe if there is a clear reason to have to fluff the record. Maybe it is only the third paper from a 5 yr grant and you really need to know about this to review their continuation proposal. I can see that. I have sympathies. But a list of 8 manuscripts from disparate projects in the lab that are all in preparation? Boooo-gus.

Thought of the Day

September 6, 2013

We must tread lightly when equating what represents enough work for a publication to either dollars or hours spent.

But if the standard for reasonable productivity under a grant award (such as the R01) is, say, 6+ papers, and reviewers and editors think a single pedestrian paper should contain most of what is proposed in that entire award, then someone is not playing with a full deck.

On showing the data

September 5, 2013

If I could boil things down to my most fundamental criticism of the highly competitive chase for the “get” of a very high Impact Factor journal acceptance in science, it is the inefficiency.

GlamourDouchery of this type is an inefficient way to do science.

This is because of several factors related to the fundamental fact that if the science you conduct isn’t published it may as well never have happened.

Science is an incremental business, ever built upon the foundations and structures created by those who came before. And built in sometimes friendly, sometimes uneasy collaboration with peers. No science stands alone.

Science these days is also a very large enterprise with many, many thousands of people beavering away at various topics. It is nearly impossible to think of a research program or project that doesn’t benefit by the existence of peer labs doing somewhat-related work.

Consequently, it is a near truism that all science benefits from the quickest and most comprehensive knowledge of what other folks are doing.

The “get” of an extremely high Impact Factor Journal article acceptance requires that the authors, editors and reviewers temporarily suspend disbelief and engage in the mass fantasy that this is not the case. The participants engage in the fantasy that this work under consideration is the first and best and highly original. That it builds so fundamentally different an edifice that the vast majority of the credit adheres to the authors and not to any part of the edifice of science upon which they are building.

This means that the prospective GlamourArticle authors are highly motivated to keep an enormous amount of their progress under wraps until they are ready to reveal this new fine stand-alone structure.

Otherwise, someone else might copy them. Leverage their clever advances. Build a competing tower right next door and overshadow any neighboring accomplishments. Which, of course, builds the city faster….but it sure doesn’t give the original team as much credit.

The average Glamour Article is also an ENORMOUS amount of work. Many, many person years go into creating one. Many people who would otherwise get a decent amount of credit for laying a straight and true foundation will now be entirely overlooked in the aura of the completed master work. They will never become architects themselves, of course. How could they? Even if they travel to Society Journal Burg, there is no record of them being the one to detail the windows or come up with a brilliant new way to mix the mortar. That was just scut work for throwaway labor, don’t you know.

But the real problem is that the collaborative process between builders is hindered. Slowed for years. The dissemination of tools and approaches has to wait until the entire tower is revealed.

Inefficiency. Slowness. These are the concerns.

Sure, it is also a problem that the builders of the average Glamour Article tower may not share all their work even after the shroud has been removed. It would be nice to let everyone know just where the granite was found, how it was quarried and something about the brand new amazing mortar that (who was that anonymous laborer again? shrug) created. But there isn’t really any pay for that and the original team has moved on. Good luck. So yes, it would be good to require them to show their work at the end.

Much, much more important, however, is that they show each part of the tower as it is being created. I mean, no, I don’t think people need to work with a hundred eyes tracking their every move. I don’t think every little mistake has to be revealed, nor do I think we necessarily need to know how each laborer holds her trowel. But it would be nice to show off the foundation when it is built. To reveal the clever staircase and the detailing around the windows once they are installed. Then each sub-team can get their day in the sun. Get the recognition they deserve.

[And if they are feeling a little oppressed, screw it, they can leave and take their credit with them. And their advances in knowledge can be spread to another town who will be happy to hire this credentialed foundation builder instead of some grumpy nobody who only claims to have built a foundation.]

The competition for Glamour Article building can’t really catch up directly; after all, it takes a good bit of work to lay a foundation or create a new window design. They can copy techniques and leverage them, but there is less chance of an out-and-out scoop of the full project.

So if the real problem is inefficiency, Dear Reader, the solution is most assuredly the incremental reveal of progress made. We don’t need to watch the stirring and the endless recipes for mortar that have been attempted, we just need to know how the successful one was made. And to see the sections of the tower as they are completed.

Ironically enough, this is how it is done outside of GlamourCity. In Normalville, the builders do show their work. Not all of it in nauseating detail but incrementally. Sections are shown as they are completed. It is not necessary to wait for the roof to be laid to show the novel floorplan or for the paint to be on to show the craft that went into the floor joists.

This is a much more efficient way to advance.

It has to be, since resources are scarce and people in Society Burgh kind of give a shit if one of their neighbors is crushed under a block of limestone. And care if an improperly supported beam cracks and they have to get a new one.

This is unlike the approach of Glamour City where they just curse the laborer and draft three new ones to lift the block into place. And pull another beam out of their unending pile of lumber.

Discussion

September 5, 2013

It’s been a bit since I pontificated on discourse. (I know PhysioProffe really misses this type of blather.) I do recommend you read those prior comments.

For today though, a more conciliatory note.

While we might ferociously stick to our position, talking points and arguments in certain scenarios, if we really genuinely want to advance a discussion this can be unwise.

It is essential to drop your position and pugnacity for a second or two and really, genuinely consider where the other person is coming from.

To walk the proverbial mile in their shoes.

And above all else, to think hard about how your stance and opinions appear to other people. This requires considering how they perceive you, not just how you perceive yourself.

It can also help to credit the other person’s concerns as if they were as important to them as your concerns are to you. Because chances are this is indeed the case.

I find myself in yet another knock-down argument with a guy who, I am pretty sure, I share a lot of fundamental concerns with. On the face of it.

Yet I am convinced this guy is almost pathologically unable to genuinely recognize and consider the viewpoint and circumstances of other people.

There are generally two reasons for this.

First, a sort of overweening personal arrogance that, I am sad to report, is endemic to academics. This is the sort of arrogance born of a lifetime of being smarter than most other people, burnished by happening into a position of (modest, this is academics, mind) power in which many people do not challenge you. Underscored by a profession that, despite the credit supposedly coming from the work you have done, obsessively views accomplishments as the subsidiary outcome of personal worth.

I don’t think, after a few go-rounds with this fine chap, that this is the problem.

This leaves me with the second reason, wherein the inability to budge off talking points, the refusal to see the complexity of human trajectories, and the blindness to others’ lived experience all come from a theological adherence to a higher calling.

Religion, in essence.

It does funny things to people.

I do my fair share of preaching around this blog. And I do my fair share of sticking to my talking points.

But anyone who has been around long knows that what I’m really addicted to is the differential lived experiences of those of you more or less in the broader envelopes of academia, academic science and particularly the subfields that fall under the broad scope of Biology.

I am addicted to walking the mile in your smelly-arsed shoes folks.

Show me the data, Jerry!!!!!!

September 3, 2013

Today’s Twittsplosion was brought to you by @mbeisen:

he then elaborated

and

and

There was a great deal of distraction in there from YHN, MBE and the Twitterati. But these are the ones that get at the issue I was responding to. I think the last one here shows that I was basically correct about what he meant at the outset.

I also agree that it would be GREAT if all authors of papers had deposited all of their raw data, carefully annotated, commented and described (curated, in a word) with all of the things that I might eventually want to know. That would be kickass.

And I have had NUMEROUS frustrations that I cannot tell even from the Methods sections what was done, how the data were selected and groomed, etc., in many critical papers.

It isn’t because I assume fraud but rather that I find that, when it comes to behaving animals in laboratory studies, details matter. Unfortunately we all wish to overgeneralize from published reports….the authors want to imply they have reported a most universal TRUTH and other investigators wish to believe it so that they don’t have to sweat the details.

This is never true in science, as much as we want to pretend.

Science is ever only a description of what has occurred under these specific conditions. Period. Including the ones we’ve bothered to describe in the Methods and those we have not bothered to describe. Including those conditions we do not even know or suspect might have contributed.

Let us take our usual behavioral pharmacology model, the 10 m Hedgerow BunnyHopper assay. The gold standard, of course. And everyone knows it is trivial to speed up the BunnyHopping with a pretreatment of amphetamine.

However, we’ve learned over the years that the time of day matters.

Until…finally….in its dotage, er, seniority. The Dash Lab finally fesses up. The PI allows a trainee to publish the warts. And compare the basic findings, done at nighttime in naive bunnies, with what you get during the dawn/dusk period. In Bunnies who have seen the Dash arena before. And maybe they are hungry for clover now. And they’ve had a whiff of fox without seeing the little blighters before.

And it turns out these minor methodological changes actually matter.

We also know that dose response curves can be individual for amphetamine and if the dose is too high the Bunny just stims (and gets eaten by the fox). Perhaps this dose threshold is not identical across individuals, so we’re just going to chop off the highest dose because half of them were eaten after that dose. Wait…individuals? Why can’t we show the individuals? Because maybe a quarter are speeded up by 4X and a quarter by 10X and now that there are these new genetic data on Bunny myocytes under stressors as diverse as….

So why do the new papers just report the effects of single doses of amphetamine in the context of this fancy transcranial activation of vector-delivered Channelrhodopsin in motor cortex? Where are the training data? What time of day were they run? How many Bunnies were aced out of the study because the ReaChr expression was too low? I want to do a correlation, dammit! and a multivariate analysis that includes my favorite myocyte epigenetic markers! Say, how come these damn authors aren’t required to bank genomic DNA from every damn animal they run just so I can ask for it and do a whole new analysis?

After all, the taxpayers paid for it!

I can go on, and on and on with arguments for what “raw” data need to be included in all BunnyHopping papers from now into eternity. Just so that I can perform my pet analyses of interest.

The time and cost and sheer effort involved are of no consequence because of course it is magical unicorn fairy free time that makes it happen. Also, there would never be any such thing as a protracted argument with people who simply prefer the BadgerDigger assay and have wanted to hate on BunnyHopping since the 70s. Naaah. One would never get bogged down by such a thing in irrelevant stuff better suited for review articles. Never would one have to re-describe why this was actually the normal distribution of individual Hopping speeds and deltas with amphetamine.

What is most important here is that all scientists focus on the part of their assays and data that I am interested in.

Just in case I read their paper and want to write another one from their data.

Without crediting them, of course. Any such requirement is, frankly my dear, gauche.

In Science, from Sandra L. Schmid, Ph.D. [PubMed], who is Chair of Cell Biology at UT Southwestern.

The problem:

CVs provide a brief description of past training—including the researcher’s pedigree—as well as a list of awards, grants, and publications. A CV provides little insight into attributes that will ensure future success in the right environment. For example, a CV is unlikely to reflect the passion, perseverance, and creativity of individuals who struggled with limited resources and created their own opportunities for compelling research. Nor is a CV likely to identify bold and imaginative risk-takers who might have fallen—for the moment—just short of a major research success. The same is true for those who found, when they realized their goal, that their results exceeded the imaginations of mainstream reviewers and editors, the gatekeepers of high-profile journals. Finally, for junior hires at early stages of their careers, a CV is unlikely to reveal individuals who are adept at recombining knowledge and skills gained from their graduate and postdoctoral studies to carve out new areas of research, or those able to recognize and take advantage of unique opportunities for collaboration in their next position.

Her Department’s solution:

We will be asking applicants to write succinct cover letters describing, separately and briefly, four elements: (1) their most significant scientific accomplishment as a graduate student; (2) their most significant scientific accomplishment as a postdoc; (3) their overall goals/vision for a research program at our institution; and (4) the experience and qualifications that make them particularly well-suited to achieve those goals. Each of the cover letters will be read by several faculty members—all cell biology faculty members will have access to them—and then we will interview, via video conferencing technologies, EVERY candidate whose research backgrounds and future interests are a potential match to our departmental goals.

She closes with what I see as a deceptively important comment:

Let’s run this experiment!

You have probably gleaned, Dear Reader, that one of my greatest criticisms of our industry is that the members of it throw all of their scientific training out the window when it comes to the actual behavior OF the industry. Paper review, grant review, assessment of “quality”, dealing with systematic bias and misdirection…… MAN we are bad at this.

Above all, we are reluctant to run experiments to test our deep-seated beliefs. Our belief that GRE quantitative or verbal or subject scores predict grad school performance. Our belief that undergraduate GPA is the key, or maybe it is research experience in a lab of some DewD we’ve heard of. Our belief that what makes the postdoc is X number of first author pubs in journals of just exactly this Impact Factor. Our confidence that past performance predicts future success of our new Asst Professor hire….or tenure candidate.

So often we argue, viciously, our biases. So infrequently do we test them.
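Not that the test has to be fancy. Here is a minimal sketch, with entirely made-up numbers and placeholder measures (GRE quantitative score against first-author papers by year 5, neither of which I am claiming is the right metric), of the sort of check any graduate program sitting on a decade of admissions records could run:

from scipy import stats

# Hypothetical records: (GRE quantitative score, first-author papers by year 5).
# Both the predictor and the outcome measure here are placeholders, not real data.
records = [
    (158, 2), (170, 1), (162, 4), (165, 0), (155, 3),
    (168, 2), (160, 1), (172, 3), (150, 2), (166, 5),
]

gre_scores = [gre for gre, _ in records]
papers = [n_papers for _, n_papers in records]

# How much of the variance in the outcome does the cherished predictor explain?
r, p_value = stats.pearsonr(gre_scores, papers)
print(f"r = {r:.2f}, r^2 = {r*r:.2f}, p = {p_value:.3f}")

Arguing over whether papers-by-year-5 is the right outcome measure is fair game, but at least then we would be arguing about data instead of articles of faith.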

So bravo to Chair Schmid for actually running an experiment.