The NIH must dismantle the corrosive competitive culture of science

January 29, 2013

I like competition, don’t get me wrong. I engaged in inter-school competitive sports from freshman year of high school through my senior year of college. I played intramural sports from late high school through the end of graduate school. I’ve done competitive sports outside of school organizations from high school until…yesterday. Essentially uninterrupted.

Nowadays, I spend a solid plurality of my weekends schlepping one kid or another around to a competitive sporting event.

[Image: roids_mcgwire.jpg. Caption: “Just milk?”]
I love what competition does for us on many levels, of course. This should be obvious from the above. Of its many benefits, the most useful thing competition does is make us strive to be better. It makes us practice to improve our play and our game. It makes us get fitter, more accomplished, more capable. It makes us attain performance levels we didn’t know we could reach.

This is true in science as well.

Science is indeed a competitive business, as most of my Readers know full well.

We positively reify the markers of success: making a particular scientific discovery first. Accomplishing some demonstration or discovery with the greatest panache. Coming to a realization or theory that changes the way everyone else thinks about a topic. Creating a medical therapeutic approach…or the basis for such a thing. The accolades are both arbitrary (prizes, “respect”) and specific (grant funding, jobs of increasing worth, etc.). At heart, scientists are trying to learn things about the function of the natural world, and so there is an overarching competition to advance knowledge.

In all of this, the pot is sweetened by the competition. The scientist receives part of her respect not merely for accomplishing a certain task but for doing it before, or better than, the next scientist.


This reality can be fantastic for science. As in sport, the competition makes us work harder, makes us up our game and motivates our excellence. This speeds the advance of knowledge. One of my favorite formative anecdotes was the late 80s-mid 90s competition between several laboratories to comprehensively identify the role that various medial temporal lobe structures (e.g., the hippocampus) played in memory function. In this case the competition was made more acute (as it often is in science) by disagreement. One lab thought structure X really did Y and another insisted it did Z. Or Y’, perhaps. And every year the Society for Neuroscience annual meeting would have a hilarious slide session in which the labs would bash away at each other. Almost always…pointedly. Sometimes in semi-personal attacks. Then they would scurry back to their labs, publish a paper or three and come back next year ready for more battle with their latest results. Understanding was advanced.

This is where we depart from competitive sports.

The key feature in my anecdote is that the labs would publish. Most if not all of their results. And they would discuss their latest findings at meetings. With. their. competitors. Knowledge was built not just by the major players but by anyone else who cared to chip in as well. Because there was a superseding goal that went beyond the simple question of who crossed a line first or who scored the most points.

That goal was the provision of knowledge to everyone. Because scientific advance requires a collaboration amongst many. This is why we publish papers that include full methodological description. This is why we are expected to be honest about how we did a particular study. This is why we are expected to share the very intellectual property that was necessary for the experiments!

People seemed to understand this, and acted accordingly, during the medial temporal lobe memory warz.

The trouble comes when we start behaving a little too much like sports competition.

Before I get into it, another analogy. Take business. It used to be that competition was about money, yes, but also about providing a service or building a widget. Making something that people wanted and needed. The marker of success was not just driving your competitors out of business…but being the best at providing rail service from New York to San Francisco. At supplying an automobile that people could afford….and that worked. At some point, business became more about scoring the most points or crossing the tape first. The role of arbitrary performance indicators (unimaginable sums of money, unconnected to anything that could be viewed as necessary for the participants) in motivating behavior totally supplanted real indicators. And in many cases the product or service suffered tremendous harm. As did the consumer.

We have reached this point of transition in science. The marker of success is the mere fact of publication* of a paper in Journals of established, but arbitrary, rank. It is no longer about the actual finding or any sense of advancing science or knowledge. Papers are increasingly disconnected from each other and from anything that is of any reasonable importance to know.

So why should the NIH care? No, I don’t mean about the last point here. Yes, the relevance of work funded by the National Institutes of Health does concern me. However, the appropriate valuation along the scale from “basic” to “applied” research is not today’s topic.

Today’s topic is the efficiency with which the science that the NIH pays for is advanced.

Sadly, we are in a time of great secrecy within science. Because being first** to some finding is rewarded above and beyond all other things, the very essence of the competition demands not letting anyone else know what you are doing until it is published. The typical manuscript in our most respected journals requires many person-years of work. And much of this work never sees the light of day for various reasons. It is negative. Merely supportive. A blind alley. Or perhaps just of insufficiently amazing interest.

More sordidly, much of this work never sees the light of day because it might help a competitor lab to beat us next time.

This is being done on the NIH dime. Right now, in labs all across the US. Many, many hours and $$$ of work being conducted that will never see the light of day (i.e., be published).

Admittedly there is a lot of work that nobody wants to see. I get this. I am no fan of Open Notebook Science. I want scientists to present their work to me somewhat triaged for interest. But we are well down the road from that level at present.

The “cost” to the NIH is not merely the invisibility of data and findings that they have already paid for. It is also in the future expenditures as another laboratory has to repeat the same experiments, generate the same blind alleys, waste the same time evaluating bad reagents or theories.

Sadly, some labs even lay a false trail by describing their Methods so incompletely that other labs get a wrong impression of what needs to be done.

This can burn years of a trainee’s time in a lab. No joke and no exaggeration.

And we haven’t even arrived at the discussion of fraud, which is also driven by the arbitrary markers of competition.

Time for the NIH to get interested in the way that competition for arbitrary markers in science is wasting their precious taxpayer dollars. Long past time. I’m thinking of writing Sen Grassley’s committee myself! (kidding.)

Solutions? Well, we’re faced in part with a Justice Potter Stewart solution in that we can identify wasteful, GlamourPublication-chasing laboratory operations when we see them. We can also take a stab at estimating how many person-hours of work are surely being buried in the process, but this will start to get a little…forensic. But if it were easy…..

I’m going to suggest going after Glamour idiocy in two places. First, empower the Program Officers to demand a better ratio of work paid for to publications resulting. Second, the study section. Yep, beef up the analysis of “productivity” by creating a set of bullet-point guidelines for how to assess it. They have them for the other aspects of grant review, right? The Significance, Innovation, etc. criteria? Well, no problem beefing up the assessment of Productivity.

Heck, this should be a formal criterion in all grant review, not just continuation proposals. It dovetails nicely with Michael Eisen’s proposals for lab-based or person-based funding, doesn’t it? How many people have you had working in your lab and how many figures have been published? What is your total lab support, including fellowships, TAships, etc., for your trainees? Have you published as much of this work as you possibly can?

Or are you engaging in competition for arbitrary markers and relegating much of the work to the dark corners of forgotten hard drives?

Additional Reading on Fixing the NIH:
Shut off the PhD tap
We are going to fix the NIH

__
*If you ever catch yourself saying “my Cell paper”, “the Jones lab’s Nature paper” or “her Science paper” in preference or addition to a short description of the topic of the paper….you are part of the problem. And you need to step back from the brink of GlamourDouchery before you fall in for good.

**Two labs could have essentially the same idea about solving a given problem, say the function of a gene. They could beaver away with 5-20 people contributing various science over the course of years. With many millions of dollars of NIH funds expended. If they happen to wrap up their “stories” a mere two months apart, this can be the difference between being accepted into Science or Nature or not. It may even be the case that the second one to be ready is a better demonstration on all features, and yet the priority, the mere fact of submitting it for consideration first, rules the day. This is profoundly disturbed.

What is even more disturbed in the system is what happens next. Many aspects of the paper that has been beaten to the punch may not be published at all! That’s right. For the type of lab that is competing on the “get”, i.e., the mere fact of a Nature or Science acceptance, it is “back to work, minions!” time. Time to take the “story” beyond the current state of affairs and hope to win the priority battle for the next story which is big enough for Science or Nature to take it. At the very least replication is lost. More likely, there are a number of differences between the two studies, differences that maybe were of interest to other laboratories. Of interest for different reasons to the same laboratories. Or that may come into focus ten years later because of additional findings. Yet because of the competitive conundrum of science, many of those findings will be lost forever.

ETA: Forgot my disclaimer. I have, in many ways, tried to run to daylight in my scientific choices. This is in part due to what is an intrinsic orientation of mine, in part due to accidents of training history and in part due to explicit decision-making vis-à-vis the career on my part. I avoid competitive nonsense. I am not in the Glamour Chase. I am not entirely certain whether or not various steps to dismantle the bad effects of Glamour chasing, scooping, priority-focused science would be good or bad for me, to be honest.

Responses to “The NIH must dismantle the corrosive competitive culture of science”

  1. jipkin Says:

    “This is being done on the NIH dime. Right now, in labs all across the US. Many, many hours and $$$ of work being conducted that will never see the light of day (i.e., be published).

    Admittedly there is a lot of work that nobody wants to see. I get this. I am no fan of Open Notebook Science. I want scientists to present their work to me somewhat triaged for interest.”

    this reminds me of a half-brained idea I have for moving the NIH to eventually offering grants that come with strings attached. These strings are that the recipient must publish a single-figure paper on an NIH website with explanation every 2-3 months while being paid by the grant (to make it easy the figure wouldn’t have to be related to what the grant funded). Not only would this reduce the reliance of papers on The Story, but it would be an outlet for all the loose-end work that never gets published. I go into some details here: http://empiricalplanet.blogspot.com/2013/01/idea-friday-changing-scientific.html

    One can also imagine using this as an outlet for getting out in front of competitors – staking claim to Firstiness with a key figure. But that also means they have to share those results – aka good competition.



  2. Very interesting idea, but how do you think publishing more is going to lead to less competition?



  3. It is no longer about the actual finding or any sense of advancing science or knowledge. Papers are increasingly disconnected from each other and from anything that is of any reasonable importance to know.

    Nice sweeping claim that you just completely pulled out of your asse without even anecdotal support, let alone any meaningful evidence. Maybe you had weird dreams or something last night, because this entire post is one long paranoid loony parade of horribles that is unmoored from reality.


  4. jipkin Says:

    well I don’t think it would necessarily – I’m with DM in that the issue is good competition vs bad competition, not a lot of competition vs a little competition. By producing short figure-length pubs, the knowledge gets shared sooner than it would if both labs run the race in their own world.


  5. Jonathan Says:

    There’s a finite pot of money for publicly funded science, and more scientists want money than can be funded. How do you prevent a competitive system for allocating those resources?


  6. namnezia Says:

    Dude, I’m not sure what planet you’ve been on, but this type of competition and paranoia not to publish certain results has been around as long as I’ve been in science (which is close to, but admittedly not as long as, you have been). It is very field dependent and lab dependent and nothing that has changed in recent years as far as I can tell. And who says the results don’t matter? If a lab publishes a crap paper in Science everyone will recognize it for what it is, a crap paper, even if it was in Science.


  7. Jonathan Says:

    Yeah, that sort of shit went on in the 1990s when I was in British universities, it’s hardly unique to NIH-funded professors.


  8. drugmonkey Says:

    how do you think publishing more is going to lead to less competition?
    How do you prevent a competitive system for allocating those resources?

    The problem is not competition per se. I provided an anecdote for competition that I feel was positive in science. It is the thing for which we are competing and the way in which we go about it that is at issue. Arbitrary markers of success are the major problem.

    this entire post is one long paranoid loony parade of horribles that is unmoored from reality.

    uh-huh. Sure it is, PP, sure it is. You just persist in your Stockholm syndrome denial, homes. I’m sure that will work out great for you because admittedly there is little hope for change any time soon. Doesn’t make my analysis incorrect though.

    this type of competition and paranoia not to publish certain results has been around as long as I’ve been in science

    Maybe, maybe not. Perhaps it is encompassing more of the territory, perhaps with more severe consequences. Whatever. Throwing up your hands and saying “well what can we do” is bullshit.

    If a lab publishes a crap paper in Science everyone will recognize it for what it is, a crap paper, even if it was in Science.

    Riiiight. Dude, we have fraudulent papers in Science not being recognized and identified until long after the fraudster has secured an Assistant Professorship and one or two NIH grants. And that’s what we’re focused on this week. NIH grants. Jobs within the NIH-funded extramural work force. Stuff that matters and is demonstrably affected by things that we all have tolerated up to this point as “just the way it is”. [Comment edited in line with my aspiration at the outset of this series- DM]


  9. AcademicLurker Says:

    Clearly what we need to do is require fraudsters to make a confessional appearance on Oprah before they’re allowed to apply for NIH funding again.


  10. Lady Day Says:

    I actually like seeing two sets of studies published testing the same question with similar results. Makes me feel more confident about reproducibility. I hope that people don’t hold back on publication simply because they essentially duplicate someone else’s work.

    On another note, it bothers me, too, that some people hold back on discussing unpublished findings in their labs (at least in my particular area of bioscience) simply because they are afraid of being scooped. I’ve even witnessed this behavior between individuals *in the same lab*: a postdoc who refused to let anyone else use her novel knockout mouse model, including the grad student that she was supposed to train on another project (testing another, completely different question) that just happened to utilize that line of mice, because she was afraid that the grad student would scoop her. End result: postdoc never published on the mouse model (even after 7 years in the lab) and grad student left the program early with a Masters and is now a teacher in a public high school because she couldn’t get anywhere with her project – no mice. The PI didn’t care at first – he’s a BSD in his early 70’s, and it was just 2 people out of 20 in his lab at the time who didn’t get papers. However, after 2 grad students in his lab left early, his department put some pressure on him.


  11. Industry Scientist Says:

    Great post. And I think the key word here is efficiency. If there’s a limited pot of funds going around, there is no room for waste. And the drive to publish certainly drives a good deal of that inefficiency. One of the things which struck me about industry is how largely efficient it was compared to academia, especially in regards to time.

    No grant or paper writing (or, at least, no pressure to publish) means more time for designing experiments, tabulating data and being in the lab. Likewise, while I have interns and the occasional summer student to tutor, they’re by-and-large self-sufficient and competent; if they’re not, they get let go (I’ve had one mediocre intern who was released halfway through his internship because he simply wasn’t performing adequately). In other words, any students I have are a help rather than a hindrance in the lab because incompetence is not tolerated, so my time is not wasted.

    Part of me wonders if the academic system can be made more efficient if some of the time-sucking pressures are removed – i.e. drive to publish, blowing up the grant application system, graduate schools being more selective to improve the quality (and reduce the quantity) of students to train. More time = more data. And hopefully more quality data too.


  12. drugmonkey Says:

    Which reminds me Lady Day…I was pondering putting the ethical implications of not publishing work with either nonhuman or human vertebrate subjects into the post. Ultimately I decided it was too much but maybe another post. all those mice, all that breeding….just to end up not ever published because of priority paranoia!!!!!

    It’s unethical to say the least.


  13. bill Says:

    This member of the choir enjoyed the sermon, DM. Here’s me saying similar things: http://3quarksdaily.blogs.com/3quarksdaily/2007/07/competition-in-.html

    For some more formal inquiries into same, look for work by Brian Martinson, e.g.

    Click to access Competition.pdf

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1483900/?tool=pubmed


  14. AA Says:

    DM, I know you are not a fan of glamor mags, but I disagree with your comments.

    Yes, I will agree that glamor mags have the highest rate of retractions. However, for the remainder who report legitimate work, their glamor mag should not get thumb-downed like that. In my field, glamor mag pubs are *very rare*. If you are the top in my field, you typically have 2-3 glamor mags (S/N/C) for your entire career, and I’m referring to first author or primary corresponding author papers only. We get glass-ceiling-ed out at PNAS, but for those that do get into glamor mags the work/findings are significant. Sometimes I will criticize that glamor mag pubs in my field get published because of the scale rather than the science (i.e. we did a ton of shit on a scale not done before and found some interesting results). However, that itself is also significant.

    So yes, I think spending 2 years without a single paper and getting a glamor mag is something impressive. Of course, I would also argue that there should be an equivalency, e.g. 1 glamor mag = 1-2 pubs in top of field journals and 3-4 dump journals, and we should not adopt a mentality where 1 glamor mag >> 999 pubs in non-glamor (but still decent) journals.


  15. DrugMonkey Says:

    Yes well we live in a world where the C/N/S trumps *everything* else. Science has become profoundly perverted by this imbalanced impact (!).

    It is time to fix that and I am not worried about the tender feelings of those who have Glamour Mag Stockholm Syndrome. Y’all can manage.


  16. Dave Says:

    DM, have you ever tried to publish a negative human study? I have, and many times. It is not easy AT ALL. Ethical or not, it’s a major pain in the arse. Try telling the journals that they are obligated to publish a study because it is human. Conversely there are tons of horrible *positive* human data published simply because it is human.

    Nice use of the Muchowski case too by the way!!!!! One of the best non-apologies in history I think.


  17. GAATTC Says:

    Well done DM. One of your best posts. One thing we could do is include more negative results in papers. Oftentimes we leave out data that conflict with our title statement when in fact sometimes Biology is more complicated than simple yes/no answers. It may be that one cell line shows something but another does not. Rather than bury the negative data, publish it so that other labs will know and possibly come up with an explanation in future work. For this to happen, editors and reviewers need to realize that not everything in science comes in neat bite-sized packages, with no gray.



  18. [Comment edited in line with my aspiration at the outset of this series- DM]

    Oh, yeah? Well: FUCKE YOU, TOO, MOTHERFUCKER!!!!!!!!!!! You fucken wuss!!!!!!!!!!!!!!


  19. bill Says:

    Dave, have you tried PLoS ONE, BMC Res Notes or PeerJ? They (and there may be others) explicitly disavow “impact” and other Glam Mag bullshit, and should in theory be receptive to a well controlled study that shows a negative result.


  20. dsks Says:

    I agree with the problem of unpublished results, particularly negative data. To be honest, most scientists should probably be publishing at least 40% of their productivity in jnrbm.


  21. Dave Says:

    bill, this was during my PhD studies and a few years before those journals were around. We got them out in the end, but it was a struggle. Now PLoS One would be the destination for sure.


  22. namnezia Says:

    Riiiight. Dude, we have fraudulent papers in Science not being recognized and identified until long after the fraudster has secured an Assistant Professorship and one or two NIH grants.

    It may be that there’s more retractions now because (a) there are more papers being published, (b) it is much easier now to falsify data (maybe) and (c) if you are going to go through the trouble of faking a paper, you might as well go for broke and fake a C/S/N article.

    But your initial point was not about fraud, it was about how C/S/N are the only currency, making the competition much fiercer than it typically has been. But I remember when I was an undergrad and went to SfN, my PI prohibited us from talking about our project, especially to a specific lab from Germany, for fear of being scooped. This ultimately ended as a glamour mag pub, and the paranoia, competition and secrecy were all there, this being 20 years ago.


  23. drugmonkey Says:

    Man you Stockholm syndrome dudes are in *such* denial.

    But anyway, what point are you making? That competition of the negative sort existed before? Sure, did I say otherwise? What does this have to do with my point that we should fix the things that drive bad competition to the extent that we can?


  24. Grumble Says:

    “And much of this work never sees the light of day for various reasons. It is negative. Merely supportive. A blind alley. Or perhaps just of insufficiently amazing interest. ”

    This is actually very worrisome. I’m not sure it’s just because of pursuit of GlamourMag papers; rather, as Dave pointed out, it’s incredibly hard to publish negative data – period, full stop.

    The corrosive consequences of this are the topic of a new book I saw excerpted online recently. If you read past the stuff about drug studies funded by pharma companies almost always yielding results that favor the company’s product (what a fucking surprise), there are some stories about loss of human life that is directly attributable to failure to publish negative results. But the problem goes beyond just the human research realm: publication bias is, potentially, an enormous threat to the ability of science to advance – even to its ability to be “efficient”, as the NIH seems to want.


  25. Mike Says:

    But anyway, what point are you making? That competition of the negative sort existed before? Sure, did I say otherwise? What does this have to do with my point that we should fix the things that drive bad competition to the extent that we can?

    In terms of the Corrosive Competitive Culture of Science, or CorComCulSci, in the past if your work got scooped or even worse, got scooped by someone who disagreed with it, often there was no way to publish it at all. Now anything can get published somewhere, and indexed by Pubmed. That seems like a better scenario for the victims of the CorComCulSci.


  26. DJMH Says:

    I think you’re conflating cause and effect. If the NIH payline were 25%, there’d still be plenty of competition to get into the glamor mags, just as Namnezia says–it’s just that the *outcome* of publishing in JNeuro or whatever instead wouldn’t be seen as so dreadful.

    I’m personally aware of plenty of fraudulent activity occurring in “lesser” journals, precisely *because* the scrutiny is much less. So I don’t think that the glamor mags, nor the pursuit thereof, are the source of fraud, per se.


  27. Ian Holmes Says:

    …….really? Empower NIH program officers to set paper quotas? I mean, we all hate secrecy, but….. REALLY?

    I have to say, this reminds me a little bit of a Bay Area phone-in talk show I once heard, where the caller spent 10 minutes berating the military industrial complex and then, when pressed for a solution, suggested that we return to the days of “ancient wise kings and queens who ruled with the guidance of gemstones and astrology”.


  28. drugmonkey Says:

    I think you’re conflating cause and effect.

    As I tried to make clear, I am talking here about the fact that the negative sort of competition is bad for the NIH. It renders the process it pays for less efficient. I don’t believe I suggested in the post that paylines at the NIH are driving this. Now, do reduced paylines and the willingness for the NIH to go along with the GlamourScience sham tend to throw accelerant onto the fire? Yes.

    I’m personally aware of plenty of fraudulent activity occurring in “lesser” journals

    You may be “aware” of it, DJMH, but the stats to date do not support your (and PP’s) contention. The data *do* support mine. I also like my rationale for why this is the case a whole heck of a lot better. After all, scientists must at root believe in causality, right? When the fraud-hunters show us that journal IF is uncorrelated with fraud then we can discuss your anecdotes more seriously.

    suggested that we return to the days of “ancient wise kings and queens who ruled with the guidance of gemstones and astrology”.

    I’d say there are definitely people calling for a return to some version of the old days when it comes to the NIH….Michael Eisen had codified the fondness for giving the anointed few essentially sinecure funding for life in his post. I’m not very comfortable with this. I doubt very much that you will find me, in aggregate, seeking solutions that return us to the state of affairs in 1973 or 1963.

    Empower NIH program officers to set paper quotas? I mean, we all hate secrecy, but….. REALLY?

    There has always been a part of the NIH review process in which “productivity” is assessed and rewarded. I’ve made it pretty clear what my problem is with the current system…the inefficiency. Your time would be better spent either showing where my contention is wrong or proposing a better solution, as opposed to complaining about my solution to the problem I’ve identified.

    This, my friends, is the process for advancing in this discussion. Do you or do you not agree with the *problem* that someone has identified? That is critical. Is the problem slightly different? Is it larger? Smaller? Is it a problem but you don’t care about its effects? etc.


  29. Pascale Says:

    I have sat on study sections where the phrase “they only generated one paper in their previous funding window, but it made the cover of Science” was accepted as adequate productivity.
    Of course, there is also the problem that so many other journals do not want to publish stuff that isn’t positive or that is of the “we found it next” variety.
    Of course, a number of paradigm-shifting observations (for example, the initial description of an atrial natriuretic factor) end up in really obscure, low-impact journals because the reviewers refuse to believe the data.
    I also believe that the size of the least publishable unit has increased exponentially over the last 20 years. It used to be that you could demonstrate a sub-aim of a project with several experiments and publish it. Now, reviewers always seem to demand more experiments until you have your whole 5-year grant or more in a single paper, no matter the impact factor. You showed that this receptor transmits effect X of substance Y. That’s “descriptive;” only when you show every intracellular signaling intermediate can you claim “mechanistic.”
    This increase in experimental data requirements demands a large factory lab that can study everything in a grant simultaneously to grind out that sort of publication. It requires lots of hands (thus the need for cheap labor like endless post-docs) and means that negative parts may never be published or, at best, relegated to “supporting data.”
    These increases in data requirements have evolved over the 20 years I have been in this biz. It has a lot to do with the demise of my basic science lab.


  30. Busy Says:

    Of course, a number of paradigm-shifting observations (for example, the initial description of an atrial natriuretic factor) end up in really obscure, low-impact journals because the reviewers refuse to believe the data.

    This. There was a study which tracked seminal papers that led to Nobel Prizes in Economics many years later. Nearly all of them were rejected at least once, many twice, and several three times because reviewers refused to bend their minds around the breakthroughs they had just witnessed.


  31. Jonathan Says:

    I’m still failing to see how any of this fixes the problem of a large group of people competing for a smaller pool of resources, or how NIH is responsible or able to do so. NIH doesn’t control universities’ decisions to hire or award tenure. NIH doesn’t control universities applying for J-1 visas to bring over Chinese and Indian postdocs who’ll work for buttons. NIH doesn’t even control how much money Congress gives it (indeed, it is illegal for NIH to lobby Congress).

    Either we massively increase NIH funding to pay for all the scientists who want money and just give them grants because they asked* (thereby fixing the competition problem) or we have some kind of selective mechanism to award money to people, and at the same time cull the ranks and turn off the flow of new entries into the system. Good luck making either of those happen in the face of what will be concerted and effective lobbying by AAU, AAMC, and the universities, almost all of whom have large DC offices specifically for lobbying Congress.

    *as long as you’re asking for federal money, there will be bureaucratic requirements, Congress isn’t just going to say “anyone with a PhD can have $125k/yr to do science with.”


  32. Dave Says:

    as long as you’re asking for federal money, there will be bureaucratic requirements, Congress isn’t just going to say “anyone with a PhD can have $125k/yr to do science with.”

    How about if you changed it to:

    anyone with a PhD and a tenured or TT faculty position can have $125k/yr to do science with


  33. drugmonkey Says:

    I’m still failing to see how any of this fixes the problem of a large group of people competing for a smaller pool of resources, or how NIH is responsible or able to do so.

    This post is about minimizing the impact of such things on the efficiency with which the NIH gets what they pay for. They are most certainly “responsible” for good stewardship of the taxpayer money they dole out. I made a reasonably specific suggestion for how they can alter efficiency: “productivity” assessment.


  34. Pinko Punko Says:

    I think it depends on how closely the field is aligned with the possibility of regular CNS papers. If there is a section that has an inflated sense of significance that happens to coincide with the editorial preferences of those journals, you have got to have the study section not use journal title as shorthand. One member of a section I know does it all the time. I wanted to say: so is YOUR paper published in Top Tier Journal where the interpretations were WRONG more significant than YOUR paper published in MCB where you got the interp correct on the second go?

    UGH



  35. The data *do* support mine. I also like my rationale for why this is the case a whole heck of a lot better. After all, scientists must at root believe in causality, right?

    Occam’s razor leads me to the causality being that orders of magnitude more people read, cite, scrutinize, and attempt to replicate papers published in Cell, Science, and Nature than they do papers published in Acta Biochemica Moldovania, most of which never get cited (or probably read) at all.


  36. drugmonkey Says:

    But since it only takes one person, the right one, “scrutinizing” or attempting to replicate papers to identify fraud, there is a jump in your rationale, namely that the more people looking, the better the chance of it ending up as a retraction. That is quite a multi-step leap.

    There is also the problem, with your hypothesis, of the lengthy delay before many frauds come to light combined with the speed with which a given fraudster’s other misdeeds are identified after the first one is nailed.

    This suggests that with all the people allegedly reading CNS, fraud detection is unlikely. Anecdotally, the people doing the original busting in many of these cases are intimately connected to the fields in question. Not just GlamourJockeys who read Nature cover to cover every week. People with very specific interest are the ones that catch this stuff.

    Again, all it takes is one person; it just has to be the right one. This deflates your popularity theory.



  37. Dude, are you fucken dipping into the govt ditchweede again? The more people that pay attention to a paper, the more likely it is that one of them will give a shit enough to scrutinize and find errors–intentional or not–sufficient to warrant a retraction. And of course, as soon as one paper is thus scrutinized, then tons of people are motivated to look at others by the same author(s).

    Do you really think that the potential fraudster at East Central Bungholio University who thinks she can make tenure if she just gets one more paper in Acta Biochemica Moldovania before the deadline is any less motivated than the “glamour hound” trying to get another Cell paper?


  38. Bill Hooker Says:

    “The more people that pay attention to a paper” — intuitively, yes, but in practice how do you explain DM’s point about the lag time between publishing and discovery #1, compared with the rapid uncovering of frauds #2 through #howevermany?

    Also, of the many more people reading Glam Magz, what proportion are really reading — the way you read when it’s your own field, when you tried that experiment and you fucking know it doesn’t work like that figure claims it works? I’m not convinced that Glam Mag papers are subject to more scrutiny at that level, which is necessary to uncover fraud, than are Acta Bullshitica papers.

    Regarding the relative motivations of potential fraudsters, Henry Fucking Kissinger would agree with you (“… precisely because the stakes are so small”) but I’d suggest that the PI at East Central Bungholio is indeed less likely than a Glam Hound at some Ivy League Asshole Factory to think that her desires justify fraud. You don’t take a job at ECBU in the first place, if you think you’re entitled to fame and glory.



  39. Regarding the relative motivations of potential fraudsters, Henry Fucking Kissinger would agree with you (“… precisely because the stakes are so small”) but I’d suggest that the PI at East Central Bungholio is indeed less likely than a Glam Hound at some Ivy League Asshole Factory to think that her desires justify fraud. You don’t take a job at ECBU in the first place, if you think you’re entitled to fame and glory.

    You really think the Glam Hound at ILAF is more highly motivated to get another Cell paper than the poor shnook at ECBU is to KEEP HER FUCKEN JOBBE?


  40. ThereIsDataOnThis Says:

    Maybe you want to read this?
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1866214/



  41. “The more people that pay attention to a paper” — intuitively, yes, but in practice how do you explain DM’s point about the lag time between publishing and discovery #1, compared with the rapid uncovering of frauds #2 through #howevermany?

    This is completely consistent with my causal explanation, because the lag time is even more–and frequently forever–for papers in Acta Bullshitica.



  42. ThereIsDataOnThis

    Those “data” do not distinguish between the two possible causal models, only the numerous assumptions of the “model” do.


  43. Bill Hooker Says:

    What I think is that the ILAF Glam Hound is more likely to fucken cheat than the ECBU schnook, because she’s more likely to be an entitled fucken douche. I guess they’re about equally terrified of losing their jobs.

    You’re right about the lag time differential (which is also partly explained by the degree of concentration needed to uncover fraud — after the first instance, other papers go under the microscope).


  44. drugmonkey Says:

    Acta Bullshitica pubs are easier to attain…and more certain. Less arbitrary. Predictable from the work you are doing.

    So yeah, the pressure is less than for ILAF profs.


  45. whateverprof Says:

    “In all of this, the pot is sweetened by the competition…. As in sport, the competition makes us work harder, makes us up our game and motivates our excellence. This speeds the advance of knowledge.”

    Reading this post made me feel sick. I’m not motivated by competition. I actually hate it, and I get seriously turned off by people who try to feed their egos through their work. I realize this is a deficit and I probably need therapy for it. Meanwhile, I try to avoid working with egomaniac wannabe alphas who seem more interested in how good the science makes them look than in actually answering cool and important questions.

    I say this as an assistant prof at a BSD R1 with multiple Glamour pubs. I’m untenured. There’s a totally nontrivial chance I might not make it, but if I don’t, so what? I’ll do something else, maybe somewhere else. Life is too short, and I consider it a privilege to use public resources to work on important questions. And I find it *distracting and disheartening* that you’re maybe preoccupied with the size of my h-factor, that people really walk around wondering who’s better instead of focusing on ideas and getting the science done quickly, efficiently, and humanely.

    I’m female, btw. I’m afraid that has something to do with it. I also grew up in a hypercompetitive environment and managed to do well, although I always, always hated it.


  46. Bill Hooker Says:

    Whateverprof — you’re not alone and I don’t think you need therapy. I dislike competition too, and have always felt that the second most rewarding and enjoyable part of science is collaboration, and the neat people you get to work with. (The first, for me anyway, is getting totally lost in data analysis, “flow” per Csíkszentmihályi.)

    In fact, a lot of the watercooler conversation here at DM’s is about competition, not because folks like it but because even those who do can see that it’s gotten out of hand, and something needs to swing our collective focus back to ” actually answering cool and important questions”.


  47. JohnR Says:

    Interesting timing, there. I agree with the fellow who comments that this sort of discussion has gone on a long time, under varying circumstances. Swooping in to snaffle a publication from somebody who is working a bit slower than your group can is an oldish technique. However, your remark “Sadly, some labs even lay a false trail by describing their Methods so incompletely that other labs get a wrong impression of what needs to be done.” happens to have hit a nerve. Just yesterday, our post-doc was complaining bitterly that in order to figure out how to do a new (for her) procedure, she had to go back to protocols from the 1970-1980 period and then work forward to reinvent the wheel. Recent protocols were sketchy, at best, she felt, and it gave me the opportunity to mention to her that back in the 90s we used to joke how certain labs would publish incomplete or misleading protocols in order to sidetrack potential competition. Back then, we thought we were joking. In hindsight, perhaps not so much.


  48. Spiny Norman Says:

    @whateverprof — everything you’ve said resonates completely. Glad you & others like you are still in the game. Hang in there.

    😀


  49. drugmonkey Says:

    Whateverprof- I tried to draw a big line under the place where the athletic analogy ends, i.e., competition for competition’s sake. The “good” kind of competition ….well I guess we all have our tolerance level along the line from Kumbaya to Kutthroat. I think the key in all of this is when it crosses from positive, mutually reinforcing motivation to a place where the overall enterprise is being held back.


  50. Shridhar Jayanthi Says:

    Check the rebuttal from the arsenic life paper, obtained through FOIA: https://www.documentcloud.org/documents/564124-foia2012-nasa-01-dvergano.html. The review is such a sequence of bland responses, I can only imagine how nice these reviewers were. I wouldn’t review my own work as nicely.


  51. kant Says:

    mah


