Confidential Comments to the Editor

November 19, 2007

Noah Gray of Action Potential has a good discussion going on the role of the “confidential comments to the Editor” box in the peer review of scientific manuscripts. The lure is as follows:

At the PubMed Plus leadership conference this past June, sponsored by the Society for Neuroscience, the creation of a Neuroscience Peer Review Consortium was proposed. Here is a message from SfN president David Van Essen describing the vision for this new entity:

After an article is rejected by one journal and authors are ready to submit a revised manuscript to another journal, they will have the opportunity and the option to request that the reviews from the first journal be passed directly to the new journal (assuming that both journals are part of the consortium). In many cases, the second journal will be able to reach a decision faster and more efficiently, thereby benefiting authors as well as the overly stressed manuscript reviewing system.

This revolutionary proposal is now a reality, at least for a trial run from January to December 2008.

Go join the discussion; it looks interesting.

The comment I found most interesting came from Graham Collingridge, Editor-in-Chief of Neuropharmacology (2006 Impact Factor 3.86). Mostly, of course, because I keep meaning to blog on the topic he raised a bit obliquely:

Of course the impact factor of the “second choice” journal is likely to be less but impact factor is a divisive influence on the scientific process. What is important is what the scientists think of the actual science, which will be reflected better by the download statistics and, eventually, the citations of the paper concerned (not the average citations of all of the other papers published by that journal over the surrounding 2 year period – note that the influence a given paper has on the impact factor of the journal is, in the vast majority of cases, negligible).

So this reminded me that many Elsevier journals, including Neuropharmacology, make limited article download stats available (e.g., the Top 25 downloaded for the past quarter). Now this is something that is seriously cool and unbelievably more relevant to assessing the “impact” of a given paper than the mere fact of where it was published. The Elsevier “Top 25” site allows some degree of flexibility in searching: you can select a given calendar quarter and gate on a broad topic category (e.g., “Neuroscience”) or a specific journal (the aforementioned Neuropharmacology).

It is a real pity that this is not more extensive at present. At the least, one should be able to combine the currently available categorizations so as to find the top articles over a longer time frame or from a customized subset of journals. It would be nice to go a lot deeper than the Top 25, too. Ultimately, of course, one would wish for fully searchable and publisher-independent stats along the lines of the ISI Impact Factor and citation reports. One thing a single publisher like Elsevier could do immediately is make the actual download numbers available instead of just the ranks; this would facilitate comparison across publishers.
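Since Elsevier only publishes the quarterly ranks, stitching quarters together has to be approximate. Here is a minimal sketch of one way you could do it if you collected the lists yourself; the Borda-style scoring (rank 1 earns 25 points, rank 25 earns 1) is entirely my own invention, not anything Elsevier actually offers:

```python
# Hypothetical sketch: combine several quarterly Top 25 rank lists into one
# longer-window ranking. We only have ranks, not download counts, so we use
# a Borda-style score: rank 1 in a quarter -> 25 points, rank 25 -> 1 point,
# summed across quarters. Ties keep first-seen order (Python's sort is stable).
from collections import defaultdict

def combined_ranking(quarterly_top25):
    """quarterly_top25: list of quarters, each an ordered list of article IDs
    (index 0 = most downloaded that quarter). Returns IDs sorted by summed score."""
    scores = defaultdict(int)
    for quarter in quarterly_top25:
        for rank, article in enumerate(quarter, start=1):
            scores[article] += 26 - rank  # rank 1 -> 25 points, rank 25 -> 1
    return sorted(scores, key=lambda a: scores[a], reverse=True)

# Two quarters of made-up Top 3 lists:
q3 = ["paperA", "paperB", "paperC"]
q4 = ["paperB", "paperA", "paperD"]
print(combined_ranking([q3, q4]))  # -> ['paperA', 'paperB', 'paperC', 'paperD']
```

Crude, obviously, since an article that just misses the Top 25 in every quarter scores zero, but it illustrates that even the rank-only data could support a longer-window view.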

Why is the assessment of download stats so cool? While it is nice that people cite your papers (inaccurately in some cases!) for whatever reason, isn’t the real point that you want people reading your papers? All those countless undergrads and grad students who may not be publishing anything soon? All those teaching professors who may not be publishing at all? All those scientists in other fields that just found your paper dang interesting? People who want to learn about the things you have discovered about the natural world. Isn’t that an important category of “impact” for your papers?

A little exercise for the reader. Which would you rather have? A paper in a <4 impact factor journal that is a “Top 10 download in Elsevier’s ‘Neuroscience’ category for the three quarters after it appeared”? Or a Nature Neuroscience paper that nobody actually read? Would it make any difference to you if the default CV entry read something like:

Smith, A., Jones, B. and Doe, C. 2006 An investigation into the function of the gnupi-ergic cells of the Physio-Whimple nucleus. JournalX (2005 Impact Factor 3.6 / 2006 Impact Factor 3.8) 31(4):456-461. [Most downloaded JournalX article for Q3, Q4 2006; 9th most downloaded for JournalX and 15th most downloaded for Elsevier “Neuroscience” Q3 2006 – Q2 2007].

To wrap up: sure, there would be some obvious caveats in terms of normalizing comparisons, journal access to the academic public, etc. Sure. There are also many caveats to the “impact factor” and “citation” analyses, I’ll remind you. And it wouldn’t stand alone as a replacement for any other schema in particular. I just think it would add some very relevant advantages to our assessment of the scientific “impact” of a given paper.


19 Responses to “Confidential Comments to the Editor”

  1. Noah Gray Says:

    To wrap up: sure, there would be some obvious caveats in terms of normalizing comparisons, journal access to the academic public, etc. Sure. There are also many caveats to the “impact factor” and “citation” analyses, I’ll remind you. And it wouldn’t stand alone as a replacement for any other schema in particular. I just think it would add some very relevant advantages to our assessment of the scientific “impact” of a given paper.

    I agree.


  2. Biogeek Says:

    Don’t wish to be a cynic, but if download stats become important in career advancement, there may be incentive for some to set up click-bots or something similar to increase their numbers. So that would have to be engineered out, somehow.


  3. Noah Gray Says:

    Better ways to measure scientific impact are indeed needed. This issue needs to get more attention. But one thing at a time, I guess. First we enhance transparency and streamline the review process. Then we can tackle better measurements of individual article influence…


  4. drugmonkey Says:

    I’ve just been struck for some time now by the fact that Elsevier has this partial version of it. So, for example, if you do happen to get a paper into one of those Top 25 lists, you can already do the sort of bragging that I’ve outlined above. I mean, I’ve seen people reference mainstream media mentions, News and Views pieces and other recognitions to argue how great their papers are (e.g., in the grant review context), so really it can only be a matter of time until I see a “top downloads” stat referenced…

    Biogeek, of course. The click-based web-ad business has worked through much of this; it is just a matter of using the available tech. How much does this differ, though, from using liberal self-citing to boost your own citation numbers? How much does this differ from you-scratch-my-back citing practices? This is why I acknowledged that there would be caveats and that no system is likely to be perfect.


  5. Biogeek Says:

    DM, citation counts (Web of Science or Scopus) can be filtered to remove self-citations (easy enough to do manually, anyhow). As for scratch-my-back citing practices, well, these are in some ways in the same category as scratch-my-back paper reviewing. They do call it “peer review”, after all!

    No, no system is perfect. What do you think of the H-index? If you want to compare scientists, can’t you compare their H-indices (within a similar field and career stage, that is)?


  6. drugmonkey Says:

    Regarding self-cites, yes, I know. I’ve touched on this here in the context of citing yourself for career reasons. The point is whether whoever is examining you will bother to filter self-cites. In many cases they will not.

    Regarding h-index, I’m going back through posts and I can’t remember doing much other than pointing it out as an alternative. The post on freeware Impact Factoring lists some of the h-factor refinements as well although I can’t say I’ve played with those much since first downloading the software.

    The H-index is way better than inference from the Journal Impact Factors listed on the CV, of course, because it gets at the actual papers themselves. Anything that does this is better than “he has three C/N/S papers”. The rub is your “similar field and career stage” caveat. There is just no getting around this, and nobody but nobody is capable of truly accounting for the size of the citation pool in a given subfield when interpreting scientific worth from citation metrics. This is why my “modest proposal” was not entirely a joke.
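    For anyone who hasn’t played with it: the H-index is simple enough to compute yourself from a list of per-paper citation counts. A quick sketch (the citation numbers are made up):

```python
# The H-index: a scientist has index h if h of their papers each have
# at least h citations. Computed by sorting citation counts in
# descending order and finding the largest rank i with count >= i.
def h_index(citations):
    """citations: citation count per paper, in any order."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4 (four papers with >= 4 cites each)
```

    Which makes the field-size problem easy to see: the same code applied to a small subfield’s citation pool will simply never produce the h values a huge subfield can.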


  7. physioprof Says:

    Dude, you just ruined my intention to gin up my post on the “PhysioProf Conundrum”, as well as the posts I promised to write on job search chalk talks and one-on-one meetings with faculty. Damn you!

    This is a very germane post for me today, as we just had a paper rejected on first submission without an invitation to revise and resubmit, even though the reviews looked pretty decent. Yeah, there were issues that needed to be addressed, but it looked like the overall enthusiasm was good. So I e-mailed the editor and basically asked him to give it to me straight about the reviewers’ confidential comments to the editor.

    If they are total shit–which they can be even as the reviewer happy-talks the authors–then there is no point in arguing further. But if they aren’t, then there is room to maneuver. He told me that the enthusiasm level of the confidential comments was not too far from that of the comments to the authors. Based on this, I will probably try to go back in with a greatly augmented manuscript, provided I can convince my post-doc to do some of the stupid shit that the reviewers thought would “further increase enthusiasm for the manuscript”.

    The editors at this particular journal do a very nice job of being very honest with the authors, while still maintaining realism about where they think the reviewers can plausibly be pushed in their enthusiasm. Authors should be very grateful for this, because there is nothing worse than a Pollyanna editor who tells you to go ahead and do x, y, and z additional experiments and resubmit, only to have the reviewers pissed off that they had to look at your garbage again.

    There are some additional interesting aspects of this most recent experience, but I fear that I will blow my cover if I get into them. At some point in the future I will try to address these issues when my trail cools off.


  8. CC Says:

    From my position in the ivory tower of industry, I’d initially thought that DM’s example CV must be a joke on his obsession with impact factors. But looking around, I see that people actually do include them! Highlighted in blue text!

    I’d still suggest that Shay Lifshitz takes it a bit too far — if you have multiple Nature and Science papers, you probably chill out a bit. Including his name might have been a better use of space.


  9. drugmonkey Says:

    I was trying to be a little over-the-top in part, yes. Shoulda figured!

    But really I was trying to pre-empt the obvious answer to my query, namely “yeah, but if nobody knows about the cites, it is better to go with the C/N/S on the CV even if nobody has ever cited it”. If you could communicate the reality of your cite numbers, instead of having to rely on the reader’s inference based on the journals listed, that would obviate the concern. I notice nobody wanted to answer it anyway!

    Oh, and your two links are identical; did you have another example for the second one?


  10. drugmonkey Says:

    Okay, I’m learnin’ something today, thanks CC! The rest of you, do a little Google for “CV impact factor” and you will be amazed at just how many people do indeed list impact factor after the pub on their CV. Not as many list cites but that one CC found does. Dang.

    Of course you can get a bit bogus with this since there is no obligation to list the impact factor that the journal had when your paper was accepted, which would be the more relevant number.

    Or am I correct in my suspicion that, even though many Impact Factors are creeping upward over time, the relative journal rank is fixed?


  11. Biogeek Says:

    To inject a little perspective: in some (many) countries, people are explicitly evaluated according to the impact factors of the journals they publish in. So it’s not surprising, in some ways, that they would include the IFs on their CVs.

    If you are going to list an IF, it does make better sense to list the number from the year of publication. Of course, the IF for a given year counts citations to papers published in the prior two years, so does that mean that for a year-2000 publication one should list the 2002 IF number?

    It all gets very confusing. I also agree that cites would be more relevant to list (although by definition, these are creeping up over time too).

    Yes, IFs overall are creeping upward (as the overall research $$/enterprise and literature grows). However, relative journal ranks do shift around over time.
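    Concretely, the year-matching works like this (a toy sketch with made-up numbers; the real calculation has fiddly rules about what counts as a “citable item”):

```python
# The JIF arithmetic: the Impact Factor for year Y is the number of
# citations received in Y to items the journal published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.
# So a paper published in 2000 feeds into the journal's 2001 and 2002 IFs.
def impact_factor(year, citations_by_year, items_by_year):
    """citations_by_year[y]: citations received in y to items from y-1 and y-2.
    items_by_year[y]: citable items the journal published in year y."""
    items = items_by_year[year - 1] + items_by_year[year - 2]
    return citations_by_year[year] / items

# Made-up journal: 100 items/year, 720 qualifying citations received in 2002.
items = {1999: 100, 2000: 100, 2001: 100}
cites = {2002: 720}
print(impact_factor(2002, cites, items))  # -> 3.6
```

    Note the paper itself never appears in its “own” year’s IF; it only contributes to the two IFs that follow.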


  12. Noah Gray Says:

    Having recently left academia, but having maintained intimate associations with it (for obvious reasons), I can assure you that search committees at major institutions in the US do not need impact factors listed on CVs in order to decide whom to invite for a job interview. In fact, having discussed these types of issues with prominent scientists, I can tell you that doing so is one of the fastest ways to get your CV tossed in the trash.

    Remember, any department has to have a certain amount of social interaction, and even scientists don’t want someone around who is dorky enough to list impact factors…

    Although obviously NPG uses them on the front of every journal homepage, people have an internal list of journal prestige in their head and usually consult that before published impact factors anyway.


  13. physioprof Says:

    Noah is totally correct.


  14. Biogeek Says:

    Sure, OK, this is perhaps true for “major institutions in the US”, but at less-major institutions in Asia, Australia and Europe, impact factors are explicitly part of the evaluation process. If not for new hires, then for career progression. I know this to be a fact.

    I’m not in any way defending this practice (indeed I find it rather flawed), just stating that it exists. One possible reason: if the local scientific community is somewhat thinly spread, IFs at least give some external indication of the “quality” of someone’s science, even if you do not have expertise in or detailed knowledge of their field.


  15. drugmonkey Says:

    Noah and Biogeek: both entirely correct. One question is to predict the future and ask whether we are going to look more “dorky” in 8 years’ time.

    But Noah, getting back to the real point, perhaps with your perspective you might address the question I was asking. Which would convey more “impact/significance” in the sense of feeling great about your contribution? A paper in a universally acknowledged prestige journal that nobody read or cited? Or a paper published in an “Acta” that was in the Top 25 downloads for Neuroscience for a year?


  16. whimple Says:

    This is an easy call: take the paper in the prestige journal every time.


  17. drugmonkey Says:

    Why whimple?

    And again, this was where I asked: would it make any difference to your view if download or actual per-paper cite stats were part of the accepted, default CV?


  18. Noah Gray Says:

    Well, remember that downloads and citations don’t always really mean impact. It may just mean that people liked the way you stained your protein gels, but your results were as boring as hell. So you have to see through the citation numbers, since they don’t always tell the whole story.

    I think you would have to gamble with the low-cited prestigious-journal paper, since people are still going to hire you with that on your CV. Realistically, any paper in a prestigious journal is going to make some sort of a splash simply because of the visibility, even if it isn’t necessarily cited as many times as the “Methods in Enzymology” recipe book. Neither my reviewers nor I have done our job of serving the community if we accept a paper that doesn’t get read. So yes, I’d still shoot for the paper in the high-impact journal and take my chances…



  19. […] point I intend to make today, Dear Reader, is related to some prior thoughts of mine on the potential use of paper download statistics as an alternative to journal Impact Factor in […]


