The sole purpose of generating a review article that lays out your GrandeTheoryEleven in the biomedical sciences is so that you can try to take sole credit for ideas that occur to many members of your subfield who are following the same literature that you do.

Please note that Nobel Prizes in Physiology or Medicine are not generally handed out for review articles.

Thought of the Day

October 10, 2012

If your success as a lab depends on concealing the “real way” to do some technique then your science sucks.

From this post at Scientific American:

Fowler and Aksnes (2007) did another study on the Norwegian database, but this time it was author- rather than publication-oriented. The percentage of author self-citation was rather low – 11% – but every self-citation yielded, on average, 3.65 citations from others within 10 years. Fowler and Aksnes concluded that “self-citation advertises not only the article in question, but the authors in question.”

So yeah. Cite your own papers. Liberally.
__
Fowler, J. H., & Aksnes, D. W. (2007). Does self-citation pay? Scientometrics, 72(3), 427–437. DOI: 10.1007/s11192-007-1777-2

For a given manuscript, how much patience do you have for getting it into the right journal?

Whether it be IF that you seek, or the cachet of a specific journal in your field, how many tries are you willing to make before you submit it to a sure thing, a.k.a. a dump journal?

There were some Tweeps the other day who mentioned 7 tries. Now I don’t know if this included resubmits and ultimate rejections, or meant 7 different journals. But dayyum, people. 7?

I’m good for maybe 2 tries before I just dump it someplace I have high confidence will take it.

Perhaps I need a little more patience and/or fortitude?

The notion that mentoring in academic science used to be better is wrong.

Seriously? People are complaining that mentoring in academic science sucks now compared with some (unspecified) halcyon past?

Please.

A sad and disturbing sentiment was put up in a vignette by Female Science Professor:

the other day, a (very) senior professor told me that he was upset about another colleague who doesn’t cite his (the senior professor’s) work when he should. He said that he wants his citation numbers to be as high as possible by the time he dies because “that’s all we have” (as a legacy). I thought that was sad and disturbing, particularly coming from someone who has a large number of papers that have been cited more times than any paper of mine will ever be.

Sad and disturbing indeed. We have so much more in terms of legacy. Counting citations seems pretty…miserly.

This is absolutely BRILLIANT!

PsychFileDrawer.org is:

a tool designed to address the File Drawer Problem as it pertains to psychological research: the distortion in the scientific literature that results from the failure to publish non-replications. Most journals (especially high impact journals that specialize in publishing surprising findings that have low prior odds of being correct) are rarely willing to publish even carefully conducted non-replications that question the validity of a finding that they have published. Often the only people who learn about non-replications are those who happen to be “plugged in” to social networks that circulate this information in a fragmentary and inefficient way. Even textbook authors are rarely well informed about the replicability of the results that they report on, and may often rely upon results that are known to be dubious by those working in the area.

What a great idea. One of the justifications I recently offered for the LPU approach to publishing is that there is a hoard of not-enough-for-publication data out there that might save someone else a whole hell of a lot of time. Well, chasing after a supposed published finding as your control or launching point for new studies can land you in one of those little potholes. Wouldn’t it be nice to see a half dozen (or more) attempts to replicate an effect to (at the very least) tell you which are the key conditions and which can be manipulated for your purposes?

Other fields should try something like this.
__
Disclaimer: I’m professionally acquainted with one or more of the people apparently involved in this effort.

I was fascinated by several observations Gladwell made about the genius of Steve Jobs, late Apple honcho.

About him being an Editor, not a writer. About him refining the creations of others (as if half a hundred people hadn’t thought of a tablet computer before?). About him throwing a fit because one of his creative staff wanted more direction and less…editorializing.

There is much to ponder in here about the proper role of the PI that you work for, are, hate and/or want to become.

I am greatly enjoying reading this measured takedown

For example, the article on foxnews.com states, “Grad students often co-author scientific papers to help with the laborious task of writing. Such papers are rarely the cornerstone for trillions of dollars worth of government climate funding, however — nor do they win Nobel Peace prizes.” I will assume that the bit about “Nobel Peace prizes” was a mistake made by the Fox News writer, since as I’m sure you’re aware, scientific achievements do not lead to Peace prizes. Further, most science of any kind doesn’t lead to a Nobel Prize. They really don’t hand out that many of them.

But let’s deconstruct this one a little more. Grad students often are the lead author on scientific publications because they carried out the work. I know you feel that this shouldn’t be the case. How can they do science without a Ph.D.?! Well, it turns out that’s how you get a Ph.D. By doing research that leads to publications.

of this variety of ignorant mewling about the conduct of science.

“We’ve been told for the past two decades that ‘the Climate Bible’ was written by the world’s foremost experts,” Canadian journalist Donna Laframboise told FoxNews.com. “But the fact is, you are just not qualified without a doctorate. In academia you aren’t even on the radar at that point.”

In academia, the people who are “on the radar” for any given topic are those who are most directly and deeply involved in the work. Sometimes that breadth and depth come from a longer career in the field. Sometimes they come because as a grad student you have done nothing other than focus exclusively, think deeply and read exhaustively on a given topic. Ultimately, those who should be listened to most are those who know the most.

Academic credentials can be a marker of expertise, but they are no substitute for it.

Michael Eisen has an interesting post up today on a topic which comes up occasionally here on this blog. He blames peer review, but really it is an indictment of GlamourMag science: a criticism of the conflation of journal reputation with the quality of any article published therein.

One finger is pointed at the reviewer/editor demands for more data/studies/proof before a paper can be accepted. I agree with much of Eisen’s critique on this point.

What I am pondering today, however, is the tight NIH grant supply.

It strikes me that this is going to be a damn good thing if it stomps down on authors’ willingness to put up with unnecessary* reviewer demands for more work.

*The controls appropriate to evaluate the data as presented are fair game. Demands of the “gee, it would be cool if you also showed blahdeeblah…” variety typically are not.

This whole storify thing seems intriguing so I’m doing a test case. Nothing fancy and no editorializing. Just the stream at present. Read the rest of this entry »

I admit I got a little excited when I saw a Twitt RT’d earlier today from Noah Gray.

Soon to be in press: Strong evidence supporting the neurogenesis-depression hypothesis, from @jsnsndr: http://j.mp/qn6fyD

I’m sort of vaguely aware of, and following, a literature that is trying to establish a causal link between depression and alterations in hippocampal neurogenesis, proliferation and general plasticity based on creating new, functioning cells.

This is generally a non-human literature, typically in mouse models, and…highly correlative.

With respect to this latter, the state of the art for a long time has been to treat markers of neurogenesis (there are many stages and concepts here which I’ll glomp under one heading. Follow the link in the tweet to the Functional Neurogenesis blog for all your background reading) as dependent variables to be reported, not manipulated. This is, perhaps obviously, the case for any post-mortem human brain analysis but also for many animal models to date. In essence, you do something to the animal and then look at the markers afterward. Then you report whether neurogenesis is up, down or unchanged. So far so good. But this approach doesn’t quite get at the question of causality, which is important if we think that altering the effects on neurogenesis (say by a drug or behavioral therapy*) would have a beneficial effect on the affective disorder itself. Altered neurogenesis could, after all, be a result of depression with no causal role.

There is also a question of whether a given animal model is a valid representation of a human affective disorder. Here we can think about issues of predictive validity: does it matter, for example, what the mouse model looks like if its ability to predict which anti-depression, anti-anxiety or anti-mania drugs will work (ultimately) in the clinic is high? Of course not…if your goal is drug development.

If your goal is understanding the neurobiological underpinnings of the human disease, well, you want to be careful.

My take on the current state of the nonhuman models of depression is that we are not yet at the point where we have high confidence in calling any model “depression”. They are models; they have various points of high correspondence to human illness, but they also have limitations. They are, in some cases, highly predictive of drugs which will ultimately prove useful in the clinical setting.

So I confess that when I see a scientist (Noah Gray is, after all, a scientist by training even if he currently inhabits an Editor job) tweet “strong evidence”, well, I’m looking for some coolio stuff.

Following the link to the post on Functional Neurogenesis, I found the post title to include the word “confirmed”! yay, let’s read!!!!

So I’m excited to say that we will soon be publishing what (I think) is the best evidence that impaired adult neurogenesis actually causes depressive symptoms (in mice).

“in mice”. Fuck.

Okay, so let’s scratch the “strong evidence” and the “neurogenesis-depression hypothesis confirmed” stuff for now.

Pretty cool paper, by the sounds of it. Again, from my distinctly nonspecialist position, this is the next step: manipulations of the neurogenesis processes as independent variables to provide stronger evidence that there is a causal relationship between these processes and a behavioral or physiological phenotype. I can’t really say where this all fits into the “first demonstration” or “best demonstration” or “critical demonstration” picture so as to give you a valence for exactly how cool it is. But I do recognize that these approaches are the next place the field needs to go to better establish that the neurogenesis-depression hypothesis could be “confirmed”.

Me, until the animal models are better and more convincingly established, I want to see data in human subjects before I am willing to concede either “confirmation” or “strong evidence”.

__
ps. Do take a read over the Functional Neurogenesis blog. It is really quite good and this area of neuroscience is fun. Many of you may still labor under the old belief that the adult brain doesn’t grow new neurons and can’t repair itself. It’s no liver, but the brain does have some capacity to generate new neurons.

Peer review is supposed to make our papers better. We occasionally lose track of this fact. Our stance towards the reviewers of our manuscripts can be fairly antagonistic. After all, we wouldn’t have submitted the dang thing in the first place if we didn’t think it was ready for publication as-is, right?

It doesn’t help that one of the manuscripts we have out right now has drawn reviewer fire for some of the more maddening reasons. Basically a difference of opinion on interpretation, background and context. My view (even apart from my own manuscripts, thank you!) is that if the data are sound, well analyzed and placed in a context that is supported, it is not my place as a reviewer to hold up publication just because my interpretation differs. So these kinds of “discussions” during peer review don’t really please me.

Another paper we have in the submission process is a different matter. I have a little less confidence than usual that we know what is what when it comes to our findings, so I am really keen on seeing what reviewers have to say about it. I am looking forward to what I think is the start of a pretty cool discussion, hopefully in the sense of additional data, models and papers resulting, because I have a sense this little subarea is about to take off. Not that we’re going to be the spark, mind you. Just that we’re getting into some stuff that a bunch of other usual-suspect labs can do, and they have all the same reasons that we do to delve into these questions. There are, however, a whole lot of ways to get into the question and model the behavior.

We took one approach and I am pretty interested in what the reviewers are going to think. Will they buy that our kewl effect is actually interesting? Will they come up with a whole ’nother context in which it should be framed or interpreted, or will they sign up for our view on the phenomenon?

Can’t wait for the reviews to come in on this one…..

I’ve been having a little Twitt discussion with Retraction Watch honcho @ivanoransky over a recent post in which they discuss whether a failure to replicate a result justifies a retraction.
Now, Ivan Oransky seemed to take great umbrage at my suggestion in a comment that intentionally conflating a failure to replicate with deliberate fraud was a dereliction of their duty to science. Per usual, we boiled it down to a fundamental disagreement over connotation: what it means to the average person to see that a paper has been retracted.
I rely upon my usual solution, DearReader. Select all choices that apply when you see a retraction or that you think should induce a retraction.

A retracted paper means… (online poll)

Direct link to the poll in case you can’t see it.
My position can be found after the jump….

Read the rest of this entry »
