Confidential Comments to the Editor
November 19, 2007
Noah Grey of Action Potential has a good discussion going on the role of the “confidential comments to the Editor” box in the peer review of scientific manuscripts. The lede is as follows:
At the PubMed Plus leadership conference this past June, sponsored by the Society for Neuroscience, the creation of a Neuroscience Peer Review Consortium was proposed. Here is a message from SfN president David Van Essen describing the vision for this new entity:
After an article is rejected by one journal and authors are ready to submit a revised manuscript to another journal, they will have the opportunity and the option to request that the reviews from the first journal be passed directly to the new journal (assuming that both journals are part of the consortium). In many cases, the second journal will be able to reach a decision faster and more efficiently, thereby benefiting authors as well as the overly stressed manuscript reviewing system.
This revolutionary proposal is now a reality, at least for a trial run from January to December 2008.
Go join the discussion; it looks interesting.
I found the most interesting comment to be from Graham Collingridge, Editor in Chief of Neuropharmacology (2006 Impact Factor 3.86). Mostly, of course, because I keep meaning to blog on the topic he raised a bit obliquely:
Of course the impact factor of the “second choice” journal is likely to be less but impact factor is a divisive influence on the scientific process. What is important is what the scientists think of the actual science, which will be reflected better by the download statistics and, eventually, the citations of the paper concerned (not the average citations of all of the other papers published by that journal over the surrounding 2 year period – note that the influence a given paper has on the impact factor of the journal is, in the vast majority of cases, negligible).
So this reminded me of the fact that many Elsevier journals, including Neuropharmacology, have limited article download stats (e.g., the Top 25 downloaded for the past quarter) available. Now this is something that is seriously cool and unbelievably more relevant to assessing the “impact” of a given paper than the mere fact of where it was published. The Elsevier “Top 25” site allows some degree of flexibility in searching, as you can select a given calendar quarter and gate on a broad topic category (e.g., “Neuroscience”) or a specific journal (the aforementioned Neuropharmacology). It is a real pity that this is not more extensive at the present time. At the least, one should be able to combine the currently available categorizations so as to find the top articles over a longer time frame or from a customized subset of journals. It would be nice to go a lot deeper than the Top 25, too. Ultimately, of course, one would wish for fully searchable, publisher-independent stats along the lines of the ISI Impact Factor and Citation reports. One thing a single publisher like Elsevier could do immediately is make the actual download numbers available instead of just the ranks; this would facilitate comparison across publishers.
Why is the assessment of download stats so cool? While it is nice that people cite your papers (inaccurately in some cases!) for whatever reason, isn’t the real point that you want people reading your papers? All those countless undergrads and grad students who may not be publishing anything soon? All those teaching professors who may not be publishing at all? All those scientists in other fields that just found your paper dang interesting? People who want to learn about the things you have discovered about the natural world. Isn’t that an important category of “impact” for your papers?
A little exercise for the reader. Which would you rather have? A paper in a <4 impact factor journal that is a “Top 10 download in Elsevier’s ‘Neuroscience’ category for the three quarters after it appeared”? Or a Nature Neuroscience paper that nobody actually read? Would it make any difference to you if the default CV entry read something like:
Smith, A., Jones, B. and Doe, C. 2006 An investigation into the function of the gnupi-ergic cells of the Physio-Whimple nucleus. JournalX (2005 Impact Factor 3.6 / 2006 Impact Factor 3.8) 31(4):456-461. [Most downloaded JournalX article for Q3, Q4 2006; 9th most downloaded for JournalX and 15th most downloaded for Elsevier “Neuroscience” Q3 2006 – Q2 2007].
To wrap up: sure, there would be some obvious caveats in terms of normalizing comparisons, journal access for the academic public, etc. Sure. There are also many caveats to the “impact factor” and “citation” analyses, I’ll remind you. And it wouldn’t stand alone as a replacement for any other schema in particular. I just think it would add some very relevant advantages to our assessment of the scientific “impact” of a given paper.