Our problem is an "Avalanche of Low Quality Research"? Really?

June 18, 2010

The doyenne of all that is prof-blogging has a first-rank takedown of some idiocy posted at the Chronicle of Higher Education. A handful of professors of English, mechanical engineering, medicine, management, and geography have concluded that the greatest threat to our body scientifique is that:

the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs.

I mean seriously. This is a huge (HUGE!!!11!!!) problem, is it not?

Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field and library or laboratory research. It also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.


[Image: PaperAvalanche.jpg]
I’m not going to take on all of their strained points; FSP has done a great job, and a commenter going by jabberwocky is just slaying in the comments at the Chronicle (no links, but page down to comments 4 and 16).
The authors’ suggested change is basically to expand the format of GlamourMag science to everything: shorter papers with supplemental materials online; fewer papers that encompass a greater amount of actual work and data (albeit without actually showing much of it); publishing only in higher-impact-factor journals.
One of the things they seem to be overlooking, and one of my most sustained criticisms of GlamourMag science, is this: when you don’t show your work, when all the effort that went into validating, closing off dead alleys, verifying reagents, and refining techniques is invisible…


Somebody. Else. Is. Going. To. Have. To. Repeat. It. At. Great. Expense. In. Time. And. Money.

I know, I know. These idiots over at the Chronicle think that they can magically predict a priori what anyone else is ever going to be interested in, but this is totally and completely off base. I look at threads of data or research all the time from 5, 10, or 30 years past that never got much attention. Never got cited. And sometimes it helps me with my present work. Sometimes the fact that it is invisible explains why a whole mini-area of research is going off the wrong way. Sometimes I know of data that never got published for one reason or another that re-directs the foundering research of someone in my lab or elsewhere.
All of this suggests to me that efforts to further reduce the public, accessible appearance of data generated by the research community are very much to the future detriment of science.
I mean honestly. Have these clowns never heard of PubMed? How hard is it to synthesize the figures that you are really interested in from five papers instead of two? I do not find this to be the major impediment to my ability to understand my scientific field. I really don’t.
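For what it’s worth, pulling those extra papers out of PubMed is trivially scriptable. A minimal sketch using NCBI’s public E-utilities endpoint; the search term here is just an illustration, not anything from the Chronicle piece:

```python
# Minimal sketch: building a PubMed search query against NCBI's
# public E-utilities esearch endpoint.
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, retmax=20):
    """Return an esearch URL whose JSON response lists matching PubMed IDs."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return EUTILS_ESEARCH + "?" + urlencode(params)

url = esearch_url("dopamine transporter knockout")
print(url)
```

Fetch that URL with any HTTP client and the JSON response’s `esearchresult.idlist` field holds the matching PubMed IDs; synthesizing figures from five papers instead of two starts right there.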


  1. JohnV Says:

    I marvel at this comment on the chronicle:
    “I think we should encourage submissions that emulate the style of 100 years ago when a paper could be 100 pages or more.”


  2. Dude, the people that write that garbage over at the Chronicle of Narnia or whateverthefuck that shithole is called are just a bunch of dumbfuck humanities douchebags, anyway. I don’t know why you and FSP are bothering to even read their inane drivel.

  3. zoubl Says:

    They don’t seem to consider whether a lack of citation reflects poor science (which is assumed) or a rampant lack of scholarship, which is inexcusable. Some people seem to limit their reading to so-called glamourmags and nothing else.

  4. pietraski Says:

    I think that our major problem is a shortage of critical thinking coupled with a tendency toward self-serving advertising of what is best, whether the research is high- or low-quality. One of the authors/professors has 6 publications since 1975, and all of them are in Science.
    I know nothing about his field. I suspect that he must feel very happy about it and he is sharing his views.


  5. Uh, DM – I hate to break it to you, but PubMed is focused on subjects that have a medical impact (although sometimes it’s pretty tangential).
    Consequently, there are lots of journals that are not indexed in PubMed.
    The medical professors have no excuse, but in the other areas like engineering and management, those journals are probably not in PubMed. Just like many plant biology, ecology, education, engineering, social science, and environmental journals.
    So, it’s quite possible that those professors didn’t get the memo.


  6. Just like many plant biology, ecology, education, engineering, social science, and environmental journals.

    Well, like I said, those are just a bunch of humanities douchebags posting over there at the Chronicle of Narnia. Fuck ’em.

  7. Eric Lund Says:

    I don’t know why you and FSP are bothering to even read their inane drivel.
    I can’t speak for either DM or FSP, but somebody has to call the pompous gasbags who wrote that opinion piece on their nonsense. After all, some of them might be deans or aspiring deans who sit on promotion and tenure committees–maybe even yours. I mean, these people actually take impact factors seriously, because IF is a Rigorously Calculated Number (TM) whose inventors claim it can distinguish good research from bad research (aren’t those review papers awesome?). It was amazing to see in the article comments how resistant the lead author was to jabberwocky’s clue-by-four.


  8. Dude, no “deans” or “aspiring deans” are wasting their time hanging around at Chronicle of Narnia. Gimme a fucking break. It’s just a bunch of frustrated humanities douchebags who are too lazy to even work all fucking summer because they only get “nine months” salary.

  9. David Says:

    I completely agree with the Chronicle.
    There’s a barrage of useless work by other people gumming up the publication landscape, and preventing my own research, vastly more important, from being published as the lead article in Science. Where it belongs.


  10. Dude, I hear that behind that door, there is a magical wagical world where *everyone’s* papers get into Science.

  11. Odyssey Says:

    Dude, I hear that behind that door, there is a magical wagical world where *everyone’s* papers get into Science.
    Actually CPP, there’s just a crapper behind that door.

  12. DrugMonkey Says:

    PubMed is focused on subjects that have a medical impact
    Yeah, it was an example. Like other disciplines haven’t figured out databases to search for relevant literature yet? Actually, I guess it *is* possible. It just seems startlingly unlikely to me.

  13. JohnV Says:

    Wait … if I went into the humanities, I’d get the summer off?

  14. Eric Lund Says:

    It’s just a bunch of frustrated humanities douchebags who are too lazy to even work all fucking summer because they only get “nine months” salary.
    Sad to say, I have seen similar attitudes in the pages of Physics Today. (See M. Gad-el-Hak, “Publish or Perish–An Ailing Enterprise?”, Physics Today, vol. 57, no. 3, pp. 61-62 (2004), which you can get from their website at www dot physicstoday dot com). That’s not just an ordinary magazine, that’s the official society magazine of the American Institute of Physics. At the time he wrote the opinion piece in question, Prof. Gad-el-Hak was chair of the Department of Mechanical Engineering at Virginia Commonwealth University. And yes, he also promotes the use of impact factor in evaluating journals.

  15. Eric Lund Says:

    Correction to my post #14: physicstoday is an org, not a com.

  16. drdrA Says:

    Ha ha ha. I loved comment #16. I went back and cheered 3x. Just for yucks, the day that Chronicle article came out I went back and looked up the Marshall & Warren article on Helicobacter published in the Australian Journal of Medicine… impact factor less than 3 in the early 2000s… so who knows what its impact factor was in 1985, when that Nobel Prize-winning work that completely changed treatment for gastric ulceration was published there.
    Impact factor my ass.

  17. FSP Says:

    I like the Chronicle for news about academe, and some of the columns/essays are of interest, but there are some wacky ones as well. I wouldn’t disparage CHE as a whole just because of essays like the one that I “took down” (is that a sports term, by the way?).


  18. I wouldn’t disparage CHE as a whole just because of essays like the one that I “took down” (is that a sports term, by the way?).

    Yes, FSP. You are totally kicking motherfucking ass in the sports analogy department. It’s good to see you staying within yourself and giving it 110%.

  19. Lorax Says:

    One reason many articles are not cited is that too many editors and reviewers allow lazy PIs/post-docs/grad students to cite review articles. This is a problem: the citing authors are counting on the author of the review to be correct, and they are screwing the people who actually did the work in favor of the (generally) bigwig who wrote the review.
    Maybe this is a small problem, but it’s still a problem.


  20. Coming from a field where the highest journal impact factor is


  21. Ack, the less-than sign screwed up the top paragraph (guess it was interpreted as a failed HTML code).
    What I meant to say there:
    Coming from a field where the highest journal impact factor is less than two, I have only a big FUCK YOU to the fuckwits who wrote that Chronicle article. Considering that a medical journal full of obscure and often questionable clinical observations, like the various objects found in their patients’ asses, scores a higher IF than long, hard, painstaking work on underfunded and underpublicised topics like eukaryotic evolution, I find it *insulting* that anyone can argue that IF is any indicator of ‘quality’. Fuck off.

  22. MRW Says:

    Take a look at the authors’ publication records. In particular, the mech. engineering professor’s publications in 2009 consisted entirely of publications in journals with impact factors of 0.99, 0.63, 0.60, & 1.50.
    I don’t mean to disparage his research (I haven’t read the articles), but why is someone who publishes in such journals signing an editorial like that one?
    (I posted something similar over there)

  23. GMP Says:

    MRW, one has to be sensitive to the fact that impact factors vary *dramatically* with field size (how many people and how much funding there is). It could be that mechanical engineering journals, or those in the author’s area, simply never have super-high IFs. But it could certainly be that, as you implied, the ME author’s publication venues are really well below the standard of excellence in his field… I suppose we need to hear from someone close to that person’s field?

  24. canuckistanian Says:

    I work in a biodefense field that had essentially died in the 1960s. In 2000 I was the only PI with an NIH grant in the field. Post-9/11, every man and his dog joined the fray. My citations went through the roof (relatively speaking, of course), as did those of many papers published in the 1950–1970 era. I wonder what the latter authors think of the posthumous peak in their citation rates?

  25. G Canterbury Says:

    Yes. NIH Research Matters. And it should be glamour-magazine independent. Curiously, there is a comment on the NIH website (NIH Research Matters) on a recent paper published in Science 328:1288, 2010: “Induction of Fear Extinction with Hippocampal Infralimbic BDNF”.
    This is the link to NIH:
    http://www.nih.gov/researchmatters/june2010/06212010anxiety.htm
    Just a few thoughts:
    1. In the Science paper: it would have been great to study the effect of BDNF using additional controls (e.g., other growth factors, non-growth factors) rather than just saline.
    2. The work supports the idea that the (rat) brain is responsive to contextual/environmental “repairing” stimuli (i.e., extinction training). Are the genetic, environmental, and molecular bases for the so-called “extinction training” known? Have they been the subject of rigorous scientific research so far?
    3. I don’t know about “many lines of evidence implicate BDNF in mental disorders,” but the idea that “medications could be developed to augment the effects of BDNF” (whatever those effects might be in “treating” fear) appears to be in line with Dr. Insel’s apparent wishful thinking and faith in pharmaceutical opportunities without rigorous and consistent scientific advancement in understanding mental disorders.
    4. Yes, we could invent a pill to substitute for training human beings in everything that requires effort, creativity and most of all confidence in the human potential to overcome physical, emotional and intellectual challenges. It is quicker and socially better to make us dependent on “behavioral modifiers” like the ones exemplified by Nemeroff ‘s research. It would certainly be extremely convenient for certain greedy economic powers.

  26. MRW Says:

    GMP, you’ve completely missed my point. I never implied that the author’s publication venues were below the standard of excellence for his field. I specifically said that I didn’t mean my comment as a criticism of the professor’s research; perhaps I should have also said, “or the journals in which he publishes it.”
    My point was that the author’s actions don’t match the prescription for the problem offered in the piece he signed. The article says the problem is that people are publishing things that don’t get cited. No qualifiers for field variability are offered. It then advocates placing *more* emphasis on IFs (suggestion #2) as a major part of the solution.

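[DM here: since the thread keeps arguing about it, it may help to spell out that the standard two-year journal impact factor is just a ratio, nothing more. A minimal sketch with invented numbers, not real journal data:]

```python
def impact_factor(cites_in_year, citable_items_prev_two_years):
    """Two-year JIF for year Y: citations received in Y to items published
    in Y-1 and Y-2, divided by the count of citable items from Y-1 and Y-2."""
    return cites_in_year / citable_items_prev_two_years

# A hypothetical niche journal: 150 citations this year to the 120
# articles it published over the previous two years.
print(impact_factor(150, 120))  # 1.25
```

[Both numerator and denominator scale with how many people work and publish in a field, which is exactly GMP’s point about why cross-field IF comparisons mislead.]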

  27. I don’t think that flashy, “top-of-the-line” journals are better. They tend to be shorter, more biased (as well as less aware of and up front about their bias), and cut out much fruitful discussion in order to jam as much jargon-laden information as they can into 5 pages. To get down to the character limit, authors tend to cut out everything but the results. Who cares if the experiment has glaring flaws? I need to fit it into 5 pages, so I’ll cut out my discussion of them. Who cares about replicating my methods? I’ll chop those out, too. I am personally tired of big-journal articles that don’t explain their methods and gloss over discussion of possible confounds. It might be fine science, but with the glut of bad science writing, there’s no way to tell.

