The many benefits of the LPU
June 29, 2016
A Daniel Sarewitz wrote an opinion piece in Nature a while back arguing that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified
…a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.
Sarewitz ends with an exhortation of sorts: to publish much less.
Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.
Interestingly, this “Price” whose thinking he seemingly follows wrote in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, “an elitist”.
Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.
Price was worried about “distinguished”, but Sarewitz has adopted this elitism to claim that pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the “merely good” scientists identified by Price and whether they should be driven away or not. While not explicitly stated, this piece does have a whiff of ol’ Steve McKnight’s complaints about the riff-raff to it.
Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.
In today’s competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.
Investing more effort in fewer but ‘more complete’ publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.
One Richard Ebright commented thusly:
Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of “minimum publishable units.”
This is as wrong as wrong can be.
LPU: A Case Study
Let’s take the case of W-18. It hit the mainstream media, following its identification in a few human drug overdose cases, as “This new street drug is 10,000 times more potent than morphine” [WaPo version].
Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er, information on the situation.
In the delusional world of the “complete story” tellers, this should be accomplished by single labs or groups, beavering away in isolation, not reporting their findings on W-18 until they have it all. That might incorporate wide-ranging in vitro pharmacology to describe activity or inactivity at major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human data if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, cognitive impact with chronic use. The list goes on. And for each in vivo measure, we may need to parse the contribution of several signalling systems that might be identified by the in vitro work.
That is a whole lot of time, effort and money.
In the world of the complete-story tellers, these might be going on in parallel in multiple lab groups who are duplicating each other’s work in whole or in part.
Choices or assumptions made that lead to blind alleys will waste everyone’s time equally.
Did I mention the funding yet?
Ah yes, the funding. Of course a full-bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed in alongside existing projects, or initial efforts excused in the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:
1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]
2) There isn’t anything new here. This is just a potent synthetic opiate, right? That’s what the media headlines claim. [Except it is based on the patent description of a mouse writhing task. From the media and the patent, we have NO idea if it is even active at endogenous opiate receptors. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu opioid receptor or delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assays looking for opioid activity across all kinds of lab types. After all, the above-listed logical progression is not what happens. People don’t necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]
3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction: are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched pilot experiments in their own favored domains before finding out the results from Roth, wouldn’t it be useful to know what they have found? Why IS it that W-18 is active in writhing…or can’t this patent claim be replicated? Is there an active metabolite formed? This obviously wouldn’t have come up in the UNC assays, as they focus on the parent compound in vitro.]
Etcetera.
Science is iterative and collaborative.
It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.
Waiting while several groups pursue a supposed “complete story” in parallel, only for one to “win” and be able to publish while the others shake their scooped heads in shame and fail to publish such mediocrity, is BAD SCIENCE.
June 29, 2016 at 4:28 pm
When you have ~19 people in your lab (like Richard Ebright does) you can afford to push “complete stories”. Not so for the rest of us riff-raff.
June 29, 2016 at 5:15 pm
Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of “minimum publishable units.”
There is so much wrong with this statement. But, basically, there is no such thing as a complete story in research. It’s never complete. Never.
June 29, 2016 at 5:55 pm
When you have ~19 people in your lab (like Richard Ebright does) you can afford to push “complete stories”.
It isn’t that you can afford to do so. It is that you benefit by thinning the herd that can meaningfully compete against you. It’s an admission of the strategy. When you can’t necessarily gain by being the first to genuine fundamental insight you can reserve the credit for yourself by preventing others who are less equipped in staff and resources from making a first step.
June 29, 2016 at 5:59 pm
I run a lab studying risk factors for autism spectrum disorder. Some papers, notably the ones most easily misinterpreted or misrepresented, I hold back to be very sure I am not publishing false positives that could be damaging to public health (the opposite of the purported mandate of my NIH funding). It has hurt me on more than one study section for a period of low productivity. In a case-by-case manner, LPU can still be good science (phenomenological reports with good validation and investigation), but in some cases it is problematic (when false positive chance and impact is high). There is a nuance to this that all of the editorials and quantitative publication/impact metrics I have seen have missed to date.
June 29, 2016 at 6:18 pm
As the comedian Dara Ó Briain once said, “Science knows it doesn’t know everything; otherwise, it’d stop”. The people who believe in “complete stories” don’t get this. Science can never *be* complete.
June 29, 2016 at 6:34 pm
There is also the communication aspect of it all.
I personally absolutely dread reading so many CNS papers where they looked at 10 different things, devoted a couple of paragraphs to each, and it never becomes clear what exactly they did and how, or sometimes even why they looked at those things to begin with. The end result is disjointed data-dump papers with no clear message, that you are forced to read to “keep up” but learn very little from, and you have to seriously strain yourself to find the important nuggets of information.
On the other hand I have read many LPUs where the approach taken was “Here is question X, and here is the answer to it, and here are some details about how we arrived at it”, which might have been one of those 10 disjointed strands in a CNS paper, and those papers have been much more satisfactory reads. They also leave a better imprint in one’s mind because you associate the answer to that specific question with a paper and its authors, which is better than something you figured out by reading between the lines in a Science paper.
June 29, 2016 at 6:37 pm
Isn’t it all in the supplement, GM?
June 29, 2016 at 8:11 pm
There is also the perspective issue. What you consider an LPU, I might consider a major work to be proud of. Field dependent. Indeed: sub-sub-field dependent.
And I get to be proud of my tiny little papers. I’m not ashamed of being small time.
June 29, 2016 at 8:17 pm
Dude, you gotta try to write more concisely. Who can even read through this fucken bloviation?
June 29, 2016 at 10:00 pm
In a case-by-case manner, LPU can still be good science (phenomenological reports with good validation and investigation), but in some cases it is problematic (when false positive chance and impact is high).
You are confounding whether the *data* are strong enough to merit publication, with whether or not that publication is an LPU. You can have very strong, reliable data in an LPU, it just might be only 1-2 experiments rather than the 4-6 of a bigger paper. Conversely you can have flashy kale-causes-autism data in a major publication, but it’s still crapola.
DM isn’t saying we should all publish crap quality, he’s saying we should be willing to publish in smaller packets.
ps) everything in the autism field is wrong anyhow, so regardless of what you publish, your data are weak. kale causes autism. it’s very clear. look at california.
June 29, 2016 at 10:05 pm
You’re making a lot of sense lately DM. Is everything OK?
June 29, 2016 at 10:12 pm
@CPP – Dude, you gotta *write*. Your blog at freethoughtblogs looks dead.
Yeah…DM could have stopped at “Choices or assumptions made that lead to blind alleys will waste everyone’s time equally.” instead of using the ruse of LPU to discuss “W-18”. I am in agreement with DM’s thoughts on LPU though.
June 29, 2016 at 10:40 pm
DM isn’t saying we should all publish crap quality, he’s saying we should be willing to publish in smaller packets
The “P” in LPU stands for Publishable, people. Do we have to keep reminding you?
June 29, 2016 at 10:42 pm
Yeah…DM could have stopped at “Choices or assumptions made that lead to blind alleys will waste everyone’s time equally.” instead of using the ruse of LPU to discuss “W-18”.
This is, in theory, a science blog as well as a science-careers blog. Once in a while I feel like saying something related to drugs of abuse (or suspected ones).
June 29, 2016 at 10:48 pm
but in some cases it is problematic (when false positive chance and impact is high)
I can’t be bothered with the media-sensation part of “impact”, but if one is worried about false positives, isn’t it better if other labs can start coming at it with their various approaches? To provide both direct and indirect replications and possible extensions? The supposed problem with false positives as it relates to concerns about reproducibility is greatly reduced when we all understand that confidence is not built by a single lab’s lone “complete story” paper. It is built when we all engage in confirm-and-extend science.
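To put some toy numbers on that intuition (idealized assumptions, to be clear: independent labs, honest tests, a per-study false-positive rate of alpha), the chance that a spurious effect survives k independent replication attempts falls off as alpha^k:

```python
# Back-of-the-envelope sketch, assuming independent labs and an
# idealized per-study false-positive rate alpha (no bias, no p-hacking).
alpha = 0.05  # assumed single-study false-positive rate

for k in range(1, 5):
    # probability a spurious effect survives k independent replications
    print(f"k = {k}: {alpha ** k:.3g}")
# k = 1: 0.05, k = 2: 0.0025, k = 3: 0.000125, k = 4: 6.25e-06
```

Real replications are never perfectly independent or unbiased, of course, but the direction of the effect is the point: confirm-and-extend across labs buys more confidence than any lone “complete story” paper can.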
June 29, 2016 at 11:13 pm
I agree, and this piece is, for me, your most convincing to date on how society journals are out of step with filling this gap.
June 29, 2016 at 11:28 pm
There are so many, many journals that I am not really criticizing publishing or the stance of any given journal here. It is quite obviously possible for anyone to pursue a LPU strategy now and again if they choose.
What I am trying to do is push back against people like Ebright who apparently want all of science to wait until some huge lab is ready to release their CNS offering to the riff-raff. And for nobody to work on a given topic unless they can do it all, covering huge swaths of possible experimental territory all in one lab.
It is a selfish agenda. It prioritizes own-glory for a discovery or advance over the discovery or advance itself. This is totally out of step with what I happen to think should be the collaborative enterprise of building knowledge.
June 30, 2016 at 2:13 am
“All publications, by all researchers, at all career stages, should be complete stories.” is horseshit. No story is ever “complete” or we’d all be out of a job.
June 30, 2016 at 4:53 am
When I was a younger and naive PhD student, I also bought into the whole “complete story” scheme.
But more and more I come to the conclusion that it is in fact often very detrimental to science.
Don’t get me wrong, I am still impressed when I see a nice example of a complete story, where something is fleshed out from different angles, going from phenotype to mechanism or whatever. But in general, besides the fact that the complete-story racket really benefits those with the resources (and students/PDs) to burn, I think it has led to a lot of very crappy science being published.
This comes not only from the fact that not many labs are good at many different things, and that the more you include, the less likely it is that everything can be properly judged by the reviewers, but also from the fact that the complete-story BS is more and more required to publish in the glam mags. So imagine a biochemistry lab with some cool biochemical findings, but they know that they need genomics and in vivo work or whatever to get it published really high. In this situation I believe the chances are pretty good that crappy science is about to happen, because first it is not their core expertise and, more importantly I believe, this additional data “has” to fit nicely with the rest, and that is always a very dangerous thing.
We recently discussed a “Nature Genetics” paper from a neurobiology big shot. The behavioral stuff we could not judge, but the work on chromatin and RNA expression was atrocious and would never have flown in a specialized paper with appropriate reviewers.
Looking at the titles of other papers from the lab, I expect many of them to be similar in nature.
Nowadays, the papers I appreciate the most are often those where a group looked at one thing very thoroughly within their core competences and where I can be pretty sure that the findings are real and meaningful.
June 30, 2016 at 5:11 am
I think there is a difference between publishing LPUs (as DM says, keyword “publishable”) and slicing the salami.
As GM said upthread, a good LPU strategy is “Here is question X, and here is the answer to it, and here are some details about how we arrived at it”. It becomes salami slicing when the authors use 3 complementary techniques to arrive at the answer, and then publish this as 3 separate tiny papers/communications, all published in the same time frame (and cross-referencing each other). Worse when only 1 of the communications makes it through review because the others aren’t really publishable units and have to wait for the follow-up.
It wastes everyone’s time and money to hide an LPU until it is a “complete story” (whatever that means to the author). Far better to publish an experimental thread that tells a piece of the story and let others build on it to “complete” it.
June 30, 2016 at 8:43 am
We also need to remember that (as GM said) science papers are about communication. That means we need to understand the psychology of the audience. Science audiences today can only glean one remembered result from a paper (*). People tend to file papers mentally under a title: “that’s the paper that found X”. This means that if you have two major results you want the world to know in a paper, one of them will get forgotten (and not cited!).
* I don’t know if people gleaned multiple results from papers in the past. This single-result problem has been true as long as I’ve been in science (30 yrs).
As an interesting corollary, this opens the question of what the one result is from one of those monster many-experiment CNS papers. Sometimes it’s an overgeneralized conclusion, “drug Q causes cancer” [which sounds complete but isn’t, because it really is “drug Q caused cancer in these mice with this weird genetic background, and it wasn’t an artifact, per a dozen expensive controls”], but I think more often the one result is “we’re a really cool lab because we can do these dozen expensive and complicated experiments”.
PS. We should not forget that the LPU was the whole point of the invention of the scientific journal. Science communication before that was done with large monographs that took a decade to write and could only be published once there was a “complete story” (think Galileo’s Dialogo or Newton’s Principia). Science moves forward faster now because science papers can build on each other’s “incomplete” stories.
PPS. Why isn’t it a “complete story” that when you run experiment A under conditions B you see result C?
June 30, 2016 at 9:15 am
“This means that if you have two major results you want the world to know in a paper, one of them will get forgotten (and not cited!).”
Sure. But if you publish them both in one Nature paper, at least one of the results will get remembered. If you publish them separately in the Acta Bunnihoppia Scandinavica, then no one will see either of them and neither will get cited.
Or so the glam logic goes.
June 30, 2016 at 9:43 am
“If you publish them separately in the Acta Bunnihoppia Scandinavica, then no one will see either of them and neither will get cited.”
So, there’s Nature and then Acta Bunnihoppia Scandinavica? Nothing in between those poles?
If your study cannot get accepted in even the minimum respectable journal (MRJ) for your field/subfield, then it presents an entirely different quandary and one needs to rethink their experimental strategy/results and assess whether they should even publish that stuff without bringing it to at least MRJ level.
June 30, 2016 at 10:24 am
Some of the least complete stories I’ve ever read are published in Glam Journals.
Remember, there are three ways to get your work in top journals:
1) put together a complete, well-controlled story that is of broad interest
2) put together a paper on a truly novel discovery
3) publish an optogenetic or Crispr story (or insert recent HOT topic)
June 30, 2016 at 10:50 am
4) put together a complete, badly-controlled story that may or may not be of broad interest, have BSD as last author
June 30, 2016 at 10:51 am
4) Have the cachet and GlamourName to convince the editors that they should publish your paper, even when the reviewers say they shouldn’t.
June 30, 2016 at 11:19 am
people. stop saying optogenetics is sufficient to get into glam. patently false.
June 30, 2016 at 11:24 am
5) Discover a new GigantoTerrorTeeth dinosaur fossil.
(Now *those* papers truly have general appeal.)
June 30, 2016 at 11:27 am
tom, you realize bagging on opto is just a relatively recent shorthand to assert that Glam papers are often driven by technologies (several of them, usually) that don’t really lend new fundamental insight, yes?
June 30, 2016 at 11:42 am
qaz June 30, 2016 at 8:43 am
PS. We should not forget that the LPU was the whole point of the invention of the scientific journal. Science communication before that was done with large monographs that took a decade to write and could only be published once there was a “complete story” (think Galileo’s Dialogo or Newton’s Principia). Science moves forward faster now because science papers can build on each other’s “incomplete” stories.
We should also not forget that the scientific journal was invented at a time when scientists were mostly various aristocrats who never had to worry about where their paycheck would be coming from 6 months from now.
It’s a bit different now 😦
June 30, 2016 at 11:45 am
Alfred Wallace June 30, 2016 at 4:53 am
So imagine a biochemistry lab with some cool biochemical findings, but they know that they need genomics and in vivo work or whatever to get it published really high. In this situation I believe the chances are pretty good that crappy science is about to happen, because first it is not their core expertise and, more importantly I believe, this additional data “has” to fit nicely with the rest, and that is always a very dangerous thing.
We recently discussed a “Nature Genetics” paper from a neurobiology big shot. The behavioral stuff we could not judge, but the work on chromatin and RNA expression was atrocious and would never have flown in a specialized paper with appropriate reviewers.
Looking at the titles of other papers from the lab, I expect many of them to be similar in nature.
I’m a genomics person and I can confirm this (and not just anecdotally; large-scale studies have been done that seem to corroborate it): usually the worst-quality genomics experiments can be found in the last couple of figures in CNS papers that otherwise have nothing to do with genomics, but the experiments were done (in haste, and probably by people who have no business doing those experiments) in order to get the paper published in the CNS journal.
June 30, 2016 at 11:46 am
drugmonkey June 29, 2016 at 6:37 pm
Isn’t it all in the supplement, GM?
Unfortunately, it often isn’t.
P.S. I recently had to return a paper for revisions three times in part because the authors basically refused to include those details in the supplement…
June 30, 2016 at 12:00 pm
I have little sympathy for PIs worried about being drowned in literature. There are good ways to optimize your literature searches (one sketch below). Learn them. Your whole job is knowing what is interesting and important. If you have the proper aptitude and insight, as all these elitist scientists claim they and their ilk alone possess, then you shouldn’t have a problem.
LPU stories are great in theory, but many get bloated in the review process.
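For instance, a minimal sketch (illustrative only; the query term, date window and result cap are made up for the example) of a weekly PubMed alert using NCBI’s public E-utilities:

```python
# Minimal sketch of an automated literature alert via NCBI E-utilities.
# The saved query and parameters below are illustrative assumptions.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
QUERY = '"W-18" OR "novel synthetic opioid"'  # hypothetical saved search

def recent_pubmed_titles(query: str, days: int = 7, limit: int = 20) -> list[str]:
    """Titles of PubMed records matching `query` from the last `days` days."""
    hits = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed", "term": query, "reldate": days,
        "datetype": "edat", "retmax": limit, "retmode": "json",
    }).json()
    ids = hits["esearchresult"]["idlist"]
    if not ids:
        return []
    summaries = requests.get(f"{EUTILS}/esummary.fcgi", params={
        "db": "pubmed", "id": ",".join(ids), "retmode": "json",
    }).json()
    return [summaries["result"][uid]["title"] for uid in ids]

if __name__ == "__main__":
    for title in recent_pubmed_titles(QUERY):
        print(title)
```

Run that from a weekly cron job and “drowning in the literature” becomes a five-minute skim.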
June 30, 2016 at 12:02 pm
Here’s the thing: a *quality* LPU is not necessarily at greater risk of being a false positive. The best papers are ones that test a very clear hypothesis, and that can be done in a small or a large report, as long as the hypothesis is correctly defined. I’m perfectly OK with someone going halfway; do they need to demonstrate it in 10 model systems using 10 different cutting-edge assays? No. The reason an LPU could be a false positive is because someone claims more than what the data support.
@dr24: “And I get to be proud of my tiny little papers. I’m not ashamed of being small time.”
Abso-f*ing-lutely. I have zero interest in dramatically changing my field via flashy, paradigm-shifting behemoth reports. Those paradigm shifts are sometimes important (they are also very frequently WRONG), but so is progressing knowledge in incremental steps. I got into science to deepen scientific understanding, and I value the high quality LPU.
As an aside, this has been the worst realization of being a faculty member… that progressing/refining knowledge is not going to get me grants. My field wants big, new, sparkly. I’m tired of it.
June 30, 2016 at 12:08 pm
“the worst-quality genomics experiments can be found in the last couple of figures in CNS papers”
I would love to see an analysis of data fabrication broken down by what figure number the data were part of. That’s another danger of the complete-story nonsense. It creates a huge incentive to find a definitive ending, which is rarely a true reflection of biological reality.
June 30, 2016 at 1:18 pm
I wasn’t really talking about deliberate data fabrication, just badly executed experiments.
June 30, 2016 at 1:44 pm
“N-up until it works”!
June 30, 2016 at 2:05 pm
The LPU is now a single figure: https://www.sciencematters.io
June 30, 2016 at 2:31 pm
@Luminous: “So, there’s Nature and then Acta Bunnihoppia Scandinavica? Nothing in between those poles?
If your study cannot get accepted in even the minimum respectable journal (MRJ) for your field/subfield, …”
The MRJs for my field will not accept what I consider to be an LPU. The cute little journals way down the list (sometimes beginning with “Acta…”) will.
June 30, 2016 at 2:38 pm
” this has been the worst realization of being a faculty member… that progressing/refining knowledge is not going to get me grants. My field wants big, new, sparkly. I’m tired of it.”
As much as I sympathize with this, as an occasional study section member, I’d say that what grant reviewers in your field probably want is innovation. Innovation does not necessarily mean using flashy new techniques (although that’s one way to do it). It also means coming up with a clever experimental design, or even combining two techniques that no one has thought to put together before, or applying methodology developed in one field to beneficial end in another. You can get grant reviewers to say “huh, that’s cool” without dazzling them with OptoCrisprDreadds in every experiment. In fact, as a reviewer I sometimes experience a distinct sense of fatigue when every grant I read has the same hot new method, and I appreciate it when I come across a set of experiments that reflects intellect and not sizzle.
June 30, 2016 at 3:54 pm
5) Discover a new GigantoTerrorTeeth dinosaur fossil.
(Now *those* papers truly have general appeal.)
Not so much funding appeal, though. I know some folks at the Smithsonian & other museums and they are always having problems getting funding.
June 30, 2016 at 4:25 pm
@AnonNeuro
Matters is definitely getting some legitimately good data, and I’m pretty glad that it is working out. I think if it were coupled to a way to seamlessly string together a couple such mini-papers, to address the cohesiveness issue, it could be a much more effective way to publish data than what we currently have.
June 30, 2016 at 4:28 pm
Well, I recently sat on study section for the first time, and it confirmed my worst fears with respect to this elusive idea of innovation. Well-designed but cautious/incremental studies did not fare as well as ones that just did something totally out of the box. There was even one grant on which the SS came to consensus only after vigorous debate on the meaning of innovation. There was agreement that the grant could resolve a fundamental conflict in the field. But it just wasn’t *interesting* enough to fund. They were absolutely equating innovation with newness, which I totally disagree with. It wasn’t even that stuff had to be cutting edge (OptoCrisprDreadds, as you put it). It was that they didn’t seem to care about deepening existing knowledge and instead wanted it all to be new and shiny*.
*Like, in the sense of… we’ve made aluminum pans! We’ve made titanium pans! We’ve made steel pans! We’ve even combined two of those before, and see how much better it was? Let’s put all three together and see what happens!!!
I don’t think the top grants were bad. They were good, interesting studies. It just made me so sad to see high-quality, deliberate, elegantly designed research take a back seat.
June 30, 2016 at 4:29 pm
(presumably this is SS specific. I commented in an earlier post that I was having trouble with perceived innovation in this SS, and have therefore moved my aims to other groups that I hope will be more receptive to where I think the value lies)
June 30, 2016 at 4:48 pm
@jmz4
“I think if it were coupled to a way to seamlessly string together a couple such mini-papers, to address the cohesiveness issue, it could be a much more effective way to publish data than what we currently have.”
Completely agree. That’s why I like the “Research Advances” option for eLife, where labs can add a short follow-up to a previous article. Seems very efficient.
June 30, 2016 at 5:24 pm
“We recently discussed a “Nature Genetics” paper from a neurobiology big shot. The behavioral stuff we could not judge, but the work on chromatin and RNA expression was atrocious and would never have flown in a specialized paper with appropriate reviewers.”
And I’ve noticed the same about the behavior.
The other trend I notice is reviewers (of both papers and grants) getting excited about the studies auxiliary to their own field (i.e. neurophysiologists on the genetics, or fMRI, or behavior; geneticists/molecular biologists on the behavior, …) while trashing the work in their own field. This led to the counterproductive result of papers being accepted/grants being funded by those without the expertise to judge the quality of the work actually being done in the “core competency” of the lab.
Now, I blame the in-field reviewers for the result, too, because I used the word “trashed” and not just “critically reviewed”. In many cases, it seemed like reviewers were judging the in-field work harshly, more harshly than they would review their own work.
July 1, 2016 at 9:05 am
@Assistant Prof
“It just made me so sad to see high-quality, deliberate, elegantly designed research take a back seat.”
Which is the #2 reason why being on study section absolutely SUCKS. #1 being that it takes tons of time, which could be better spent… writing grants.
July 1, 2016 at 10:51 pm
I’m surprised so many ppl are jumping on the LPU bandwagon in this thread.
It takes a lot of time to write up results, and if you don’t expect too many ppl to read or care about your paper (pretty much the definition of LPU), there is a cost there.
And if you get caught up in publication volume, you risk asking too many small, peripheral, “low hanging fruit” questions without ever attacking the really hard but important stuff.
I’m not talking about flashy Nature stuff, just saying long, thorough studies are important too. I did my PhD in a lab that sought publication volume over quality, so I’m definitely sensitive to the fact that there is a middle ground.
July 2, 2016 at 10:25 am
That is not the definition of LPU.
July 2, 2016 at 5:32 pm
Some of my LPUs are very highly cited. It is about making a good, well supported story as soon as you have enough data, not about publishing crap.
July 3, 2016 at 5:08 pm
It is also about judging the relative need for a given piece of information to be published. IMO.