Citing review articles robs the authors of original research articles. So stop it.

September 13, 2013

Someone or other on the Twitts, or possibly in a blog comment, made a remark about academic citation practices that keeps eating at me.

It boils down to this.

One of the most fundamental bits of academic credit that accrues to authors is the citation of their research papers. Citations form the ballyhooed h-index (X papers with at least X cites each), go into the “Highly Cited” measure of awesomeness, and are generally viewed as an important indication of your impact on science.
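
For concreteness, here is a minimal sketch of how the h-index in that parenthetical is computed, using made-up citation counts:

    # h-index: the largest X such that X of your papers have at least X citations each.
    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # made-up example: six papers with these citation counts
    print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4 (four papers with at least 4 cites each)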

Consequently, when you choose to cite a review article to underline a point you are making in your own article, you are taking the credit that rightfully goes to the people who did the actual work, and handing it over to some review author.

Review authors are extracting surplus value from the people who did the actual creating. Kind of like a distributor of widgets extracts value from those people who actually made them by providing the widgets in an easy/efficient location for use. Good for them but…..

So here’s the deal. If you are citing a review only as a sort of collected works, stop doing that. I can make an exception when you are citing the review for the unique theoretical or synthetic contribution made by the review authors. Fine. But when you are just doing it because you want to make a general “..it is well established that Bunnies make it to the hedgerow in 75% of baseline time when they are given amphetamine” type of point, don’t do that. Cite some of the original authors!

If you really need to, you can cite (Jo et al, 1954, Blow et al 1985, Moe et al 2005; see Pig and Dog, 2013 for recent review).

Look at it this way. Would you rather your papers were cited directly? Or are you okay with the citations for something to which you contributed fundamentally being meta-cites of some review article?

51 Responses to “Citing review articles robs the authors of original research articles. So stop it.”

  1. SidVic Says:

    I’m flabbergasted by the laziness I see. Good lord, how much trouble is it to pull the reference from the review, AND THEN pull up the paper to assure yourself that it makes the correct point? That said, sometimes getting a handle on the literature is like drinking from the proverbial fire hose. Nevertheless, I personally get satisfaction from tracking down that 1918 Teutonic article that was the first to put forth some idea. Conversely, I make a note when I see an article that is well-referenced that this might be an author to take seriously. A lot of chaff out there…

  2. CS Says:

    I pretty much follow your (Jo et al, 1954, Blow et al 1985, Moe et al 2005; see Pig and Dog, 2013 for recent review) example in my citation practices. I think it works well for the reader (directing them to useful papers) and for giving the original authors of empirical works proper citation credit.

    On the other hand, I write a lot of review articles, and do appreciate citations to them. It is rare though that the citation actually seems to draw on any of the ideas that I proposed in the review — it is almost all (see X, 2008 for review). My very first article was a review, and is still by far my most cited — some day I’ll do an analysis of how it was cited and whether my (IMHO novel and important) ideas actually seem to have had any influence on the field.

  3. drugmonkey Says:

    Why do you write so many reviews, CS? Are you trying to take prior credit in case anyone happens to actually do some work to test “your” ideas?

  4. BioDataSci Says:

    The journals themselves (especially those that are printed) are part of the problem. If you have a citation limit, you may not have space to cite every paper that’s relevant, so it’s more convenient (although I agree not better) to cite a review.

  5. Mikka Says:

    +1 to losing the citation limits. They are a holdover from the dead-tree journal days. Cite all the literature that is pertinent.

    This will not only help primary literature in general, but also reduce the gaping chasm in impact between CNS and the rest. Many citations that should go to several papers instead go only to the CNS ones, to impress the reviewers into believing that the topic is important.

    I would even add brief comments to the citations, e.g., “this paper shows x and y”. That would be awesome.

    As a self-limitation mechanism, keep in mind that citing too much makes you look amateurish and strains credulity about your reading abilities.

  6. dsks Says:

    “I’m flabbergasted by the laziness I see. Good lord, how much trouble is it to pull the reference from the review, AND THEN pull up the paper to assure yourself that it makes the correct point?”

    Right on. Again, back in the day an appeal to pragmatism might have excused a lack of thoroughness here; i.e., do I really want to go all the way down to the stacks, pull out and read a paper from some musty old Pflugers Arch tome predating the Boer War just to verify the content summarized in a recent thorough review, all for a single citation at the end of a single relevant sentence in my thesis?*

    These days… Click, click, click, read .pdf, done. Next sentence.

    * My PhD mentor’s response was, of course, “Yes! To the basement with you, boy!” 😦 I swear I’m still coughing up the dust particles my lungs accumulated poring over manuscripts in that cursed place.

  7. pyrope Says:

    I’ve written a few reviews – I think they are useful distillations of a broad array of literature. Maybe reviews in your field are a lot narrower than in mine, but I think that beating up writers of review articles as scientists who don’t ‘happen to actually do some work’ is crappy. It takes a lot of work to synthesize a full set of literature and the process of doing so can lead to substantial insight.

    As a comment on your initial point – citation limits are common in my field and make it nearly a requirement to cite only review articles for points that are not central to the theme of the paper.

  8. CS Says:

    Why do I write reviews? Because I am really good at it. My reviews are not a boring summary of a few previous empirical results — I think there is no point in reviews like that. My reviews are broadly integrative and synthetic. They propose theories and suggest new experiments. In my last paper I evaluated the function of a region of the brain I study that gets little current attention from other researchers: I brought together old tracer studies in rodents, monkeys, and humans, electrophysiology in all three organisms, human neuropsychological studies of impairment following brain damage, human neuroimaging, computational modeling, and other methods to clearly summarize what scientists do know about what this structure does, and propose what we still need to study in the future. I’ll probably write a grant application directly focused on this structure now that I have developed the theory (and I have preliminary results from actual studies I have performed) but I believe that the theoretical review was worth publishing currently.

    I am not in biomedical sciences; rather, I do cognitive neuroscience and am housed in a Psychology department. Psychology has a long history of publishing theoretically based reviews and holding them in high esteem, and no one thinks of it as trying to claim ideas for your own before someone else tests them empirically. It honestly never occurred to me until you accused me of doing that, DM. Your research area is very strange if all that really matters is empirical results and no one cares about theory and relating results to each other.

    If someone else then scoops me and does the experiment and publishes it before I get to it — that is life, and I would give them full credit. In cognitive neuroscience there are usually multiple ways to test a theory using a variety of tasks and methods, and there is usually room for all to ultimately be published.

  9. SteveTodd Says:

    Integrated tools should make it easier to assign credit, or at least to look at the citation chain. For example, if your work gets cited in a review, and that review then gets cited a bajillion times, you should get some percentage of that credit. This obviously starts to break down when reviews cite other reviews, but imagine the exponential effect those crafty Brazilians could have cooked up in that situation.
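
    A minimal sketch of what that fractional pass-through might look like; the 10% share and all paper IDs are purely invented for illustration:

        # Hypothetical model: each citation a review receives passes a fixed
        # share of credit back to the primary papers that the review cites.
        REVIEW_SHARE = 0.10  # invented figure

        def assign_credit(citations_received, review_cites):
            # citations_received: paper id -> raw citation count
            # review_cites: review id -> list of primary paper ids it cites
            credit = dict(citations_received)
            for review_id, primaries in review_cites.items():
                if not primaries:
                    continue
                passed = REVIEW_SHARE * citations_received.get(review_id, 0)
                credit[review_id] = credit.get(review_id, 0) - passed
                for p in primaries:
                    credit[p] = credit.get(p, 0) + passed / len(primaries)
            return credit

        # a review cited 100 times that itself cites two primary papers
        print(assign_credit({"review_A": 100, "paper_1": 12, "paper_2": 3},
                            {"review_A": ["paper_1", "paper_2"]}))
        # -> review_A keeps 90.0, paper_1 and paper_2 each gain 5.0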

    And (I hope to not ruffle too many feathers with this) maybe sometimes reviews are cited because of a lack of access to the original paper.

  10. Dave Says:

    Why do you write so many reviews, CS? Are you trying to take prior credit in case anyone happens to actually do some work to test “your” ideas?

    Ouch. Such a snob.

    What about citation limits in some journals?

  11. Dave Says:

    ……..or you can just write the reviews that everyone cites in addition to doing the original research 🙂

    One that we recently published has been cited 35 times in less than a year, which ain’t bad.

  12. drugmonkey Says:

    Psychology has a long history of publishing theoretically based reviews and holding them in high esteem, and no one thinks of it as trying to claim ideas for your own before someone else tests them empirically.

    Oh nonsense. People cite and talk about those self-same “theoretically based reviews” as if they are unique brilliant insights of the authors in question. Write a good review that presages a coming area and you are da brilliant ONE that came up with it. Whether you had a whole bunch of company in those thoughts or not. Whether someone else said essentially the same thing in a prior Discussion section or not.

    Sometimes, of course, this is highly deserved because the person writing the review is also going hot and heavy on the topic area. Sometimes, maybe the person does have unique insight. But maybe also, sometimes, if you keep throwing up airballs, eventually one of them hits the net. E.g., “It is rare though that the citation actually seems to draw on any of the ideas that I proposed in the review.”

    Your research area is very strange if all that really matters is empirical results and no one cares about theory and relating results to each other.

    GrandeTheoryeElevene!11!!!!111 has its place. But don’t try to conflate theory with “relating results to each other”. A proper Discussion and integration of empirical findings doesn’t always need some high falutin theory.

    GrandTheoryes have some significant drawbacks. First, see Hauser and Stapel. Trying to “prove” a GrandTheorye that you have committed yourself to is a potential driver of fraud and fabrication. Second, it can suck all the air out of a subfield, making it entirely too conservative and, potentially worse, self-referential. If you don’t work within a highly popular framework, well, screw you. no money, no pubs. The history of experimental psychology is pimpled with these issues, dating back to Freud at the very least. Theories enforce orthodoxy…and therefore conservatism and boringness. Third, in a very practical sense, a lot of time spent working on those grand theory review papers might be better spent running a few more experiments that go further to test those theories.

  13. Alex Says:

    Reviews are fine to cite in intro sections, to make general points for readers who might be wandering in from other fields. Sometimes you want to motivate a study by saying “X is a widely-observed phenomenon…” and the right way to do it is to cite a few key papers that demonstrate X in contexts most directly relevant to your paper, and a review to give broader context for inexperienced readers (e.g. n00b grad students) and readers new to the sub-field.

    Reviews might also be useful for methods, if you want to say “The approach taken here is standard and will make it easy for us to compare our work with other reports.” Cite a review that makes it clear how widely-used your method is, and cite a few key papers that are most directly comparable to your work. That way you show broad applicability AND you also cite the specific results that you will be doing direct comparisons with.

    Sometimes reviews might be appropriate in the analysis section, to put certain pieces of data in context. For instance “Besides [whatever signal or effect you are studying in depth] we also observe [something widely-known], which is consistent with most other observations.” Citing a review there is appropriate. You want to make the point that besides the variables you are focusing on, everything else is checking out with common observations (which is an indication that you are probably on a sound footing).

    Where reviews should rarely/never be cited is if you want to make a very specific point, especially a point that is crucial to your reasoning. There you need to cite specific papers, so that a reader can delve in and see if the reports you’re citing match up with what you’re claiming.

  14. Alex Says:

    A more fundamental issue:

    Any widely-accepted fact of science is the result of synthesizing results from many, many investigations. How do you know that DNA is a double helix, that carbon has 4 valence electrons, that animals are made of cells, that cell membranes are made of phospholipids, and a hopping bunny experiences a gravitational force equal to mg? You know these things because of hundreds of experiments conducted by countless people. You presumably either don’t cite these facts, or else you cite a textbook. The textbook is a synthesis of countless investigations. In citing it, you are denying credit to the countless individuals who did the experiments that led us to those facts.

    And sometimes textbooks are wrong and deserve to be discarded in favor of more recent results. Despite that, all of us are going to go right on assuming (without citation) that DNA is a double helix, until it is proven otherwise.

  15. Christina Pikas Says:

    There’s actually a name for it: “palimpsestic syndrome,” coined, I believe, by R.K. Merton in On the Shoulders of Giants.

  16. Grumble Says:

    DM, I disagree with your point of view in the extreme. For one thing, there’s Alex’s point that there are some findings that are so well-established that one just doesn’t need to cite all the original papers. Or there might be 2 or 3 main points of view in the field, which were crystallized by a few review articles back in 1978, or 1999, or 2007. Yes, there might be a TON of papers that led up to those ideas and that follow them, but I think it’s perfectly valid to cite reviews that are (or were) influential.

    As for this: “Write a good review that presages a coming area and you are da brilliant ONE that came up with it. Whether you had a whole bunch of company in those thoughts or not. Whether someone else said essentially the same thing in a prior Discussion section or not. ”

    This is just silly. You might have a brilliant insight that you try to get across in a discussion section. Because of length limitations, or because you’re a crappy writer, you might not say it very well, or develop the idea thoroughly. So when I come along with my review where I say “the same thing” clearly, emphatically, and understandably, and take the time and space (and effort) to present all the supporting data for that idea – well, you and I haven’t really said the same thing, have we? The kernel idea might be common, but I think it’s perfectly legitimate for people to cite my beautiful review and ignore your awkward discussion section (which was part of a paper that probably contributed only a small part of the whole story anyway).

    Finally, from my own experience: my papers that get cited the most are reviews. I don’t write a lot of them, but some of their perspectives are unique, or they provide a needed summary of the literature that no one else had thought to put together, or whatever. I think I deserve the citations, thank you very much.

  17. DJMH Says:

    Dude, aren’t you the one always arguing that it is better for science if you publish in faster, low IF journals and then synthesize your brilliant ideas in a review???

  18. Dr Strangely Strange Says:

    All excellent points. There is value in the big picture offered by a retrospective look such as a review by some elder statesman since, in principle, it should not only track that X was demonstrated by Y and Z but that X was demonstrated and reproduced independently by Y’ and Z’.
    There are also clearly more people that contributed to an idea than the authors of one or two papers, but from my extremely narcissistic perspective I hope my work is never reviewed by anyone other than me and that others stick to verifying my findings or doing something original for themselves. Nothing more frustrating than having a review of your work cited more often than your work….. Guess that could also mean I am crap at selling my stuff….oh well

  19. Pinko Punko Says:

    I like citing original papers. Journals like EMBO encourage you to do so. Other journals count references against your word limits. For points that are settled in the mists of time, where the citation relates to current questions or goings-on in the field, a review is the right thing to cite in a manuscript. If it relates to a critical observation, then the original papers should be cited.

    “A wealth of structural studies have allowed us to…[either a review or 30 papers]”

    Another way to direct the readers of your paper to the appropriate source is to name the groups, even if you have to point to a review for people to get all 15 citations. I like the Jo, Blow, Moe, then recent review approach, but it is likely that you are creating a stereotype in which the JBM papers get cited and everything between their work and the review is slighted. There is no way to be perfect with this, but one thing you might do is cut the shtick when you talk about it.

  20. Alex Says:

    Just yesterday I read a review article with a very nice illustration of a point that is widely known in the field, shows up in every study, but is not well-explained in textbooks. When writing a paper, if I want a certain point to be understood by n00b readers, why not cite that review?

  21. Eli Rabett Says:

    Pinko has it right, what is the due date on a bunch of original papers when you have a good review to cite, or do you have to go back to the year dot? Citing something from Dalton in a mass spec paper would be fun tho.

  22. The Other Dave Says:

    I liked it when this blog was more about The Way Biomedical Science Really Works — and how aspiring scientists might best negotiate that reality. I don’t think bitter essays like this one are as informative or interesting.

    Instead of bitterly complaining about the tendency of people to cite reviews (which ain’t gonna change any time soon), why not talk about how to write primary research articles that attract a lot of citations? I think this would be very interesting. What sorts of titles, abstracts, and discussions are most useful and citable?

    Or how about a thoughtful post about writing review articles? When is it appropriate? What are good review articles like? Is it worth writing a review to sum up a long line of one’s own work, in order to clarify and popularize it? I think that would attract a lot of interesting discussion regarding what sort of review articles people find most useful.

  23. Nat Says:

    “…Instead of bitterly complaining about the tendency of people to cite reviews (which ain’t gonna change any time soon), why not talk about how to write primary research articles that attract a lot of citations? I think this would be very interesting. What sorts of titles, abstracts, and discussions are most useful and citable?

    Or how about a thoughtful post about writing review articles? When is it appropriate? What are good review articles like? Is it worth writing a review to sum up a long line of one’s own work, in order to clarify and popularize it? I think that would attract a lot of interesting discussion regarding what sort of review articles people find most useful.”

    Great, you have a plan for the first few posts of your new blog. I’ll be checking it out!

  24. iGrrrl Says:

    Guess what, The Other Dave! This discussion of reviews is, in fact, a post on “The Way Biomedical Science Really Works”.

    When a whole lot of data are summarized in a review, and what is pertinent to your paper or grant proposal is in fact the summary, then cite the review. Note that it is a review (reviewed in Smith, et al., 2012). Pinko Punko made this point nicely. DM made the point that if you cite an idea from the authors of the review, based on their synthesis of the field, you certainly cite that idea by citing their review. However, if you cite one piece of original data that is discussed in a review by citing the review, and not the original paper where the data appeared, you’re doing it wrong. Period. No excuses. Yes, this is how it really works.

    Look, there are practical reasons for this, not just ‘more-scholarly-than-thou’ reasons. First, if a reviewer of a grant proposal or publication is the person who originally published those data, they are NOT going to like having their work attributed to the authors of the review. This will hurt you. Second, if you don’t bother to go look at the original paper, you will likely miss something important. Maybe there are other data in that paper that have bearing on your work, but you won’t know about it unless you look at the paper. Third, maybe when you look at the data in context of the paper and your own work, you might decide that you don’t agree with the interpretation given by the author of the review. This can also have bearing on your work.

    In other words, IMO and IME, citing primary data via a review and not the original publication is 1) lazy scholarship; and 2) an intellectual and practical disservice to yourself.

  25. anonymous postdoc Says:

    Unpopular opinions alert!

    I like writing review papers, if for no other reason than the opportunity to synthesize my own thoughts on an issue and get them down on paper, along with the relevant references, so I don’t forget them. This approach seems to be helpful since these papers have been well cited and people tell me they liked them. But I wrote them for the benefit of yours truly, albeit with an agenda of helping people see things my way. No one might see that I put a Grande Theorye in there, but I think it might make people more receptive to my Lessere Theoryes in discussion sections if they’ve heard it before.

    I also like citing review papers, and I’m sure I’m contributing to the death of science, boo hoo. Frankly, it’s nearly impossible to keep up with literature outside of a fairly narrow subfield at this point. What proportion of non-seminal original research articles continue to be highly cited more than 5 years post publication? Probably not too many. The intervening 5 years have likely resulted in the publication of many dozens more papers directly related to your topic of interest, as well as many hundreds more indirectly related but still relevant to the drug, signaling pathway, tissue type, dependent measure, and/or disease you are interested in.

    Science has for many years operated on a “what have you done for me lately” model, so probably the best a publication can hope for at this point is to be included in a review, to increase the chances that the findings it communicated will ever be noticed again after its 5 year citation window.

  26. Dr Strangely Strange Says:

    Sorry DM and rest of community if I offended anyone with previous post….

  27. gingerest Says:

    One major journal in my field limits Original Articles to 30 references (“suggested”, but I try never to let an editor find an excuse to bounce my paper unreviewed). Pinko’s right – for the quick overview of the giant field right next to your research question, nothing beats the up-to-date and authoritative review article. “Although bunnies have been shown repeatedly to have poofy tails and soft paws (Oldwhiteguy and Otheroldwhiteguy, Current Opinion in Bunny Science 2013;20:201-215), no studies to date have addressed the association between ear floppiness and bunnitude.”

  28. anonymous X Says:

    This is so petty. Let’s do better science and write better papers instead of trying to scrape up another 6 citations.

  29. Grumble Says:

    ” I’m sure I’m contributing to the death of science, boo hoo”

    No, you’re not.

    I, for one, see no reason to stop citing review articles, even in the specific scenarios DM mentions. If 6 original articles make a similar point, and that point is summarized nicely in a review, I’m going to cite the review, and not always cite the 6 papers. Other people will do this too, and you know what? Sometimes they’ll cite my review instead of my original papers. Or someone else’s review instead of my original papers. I really don’t care.

    I don’t care because what counts is not how many people cite my papers, but how many people I’ve influenced with my work. I’ve always felt that if I write a well-written review that summarizes years of work (my own and others’), THAT is what is going to really get people to understand my point of view about what my work means. So they are free to cite it, and by corollary, I can hardly complain if they cite other people’s influential reviews, too.

  30. dsks Says:

    “I don’t care because what counts is not how many people cite my papers, but how many people I’ve influenced with my work.”

    Unfortunately, an increasingly important metric by which hiring and tenure committees establish a scientist’s influence is looking at their citation numbers.

    “If 6 original articles make a similar point, and that point is summarized nicely in a review, I’m going to cite the review, and not always cite the 6 papers.”

    Well, it’s a matter of judgement, isn’t it? Six papers published 20 yrs ago, regarding well-established facts of the field, by individuals who are now at the peak of their careers? Yeah, maybe then, what the hell, reference a good and current review (although in the days of the interweb, is it really so costly to do as DM suggests?)

    But based on current metrics of productivity and influence – whether right or wrong, they are what they are – choosing to cite a review instead of the recent work of a current postdoc or early independent investigator is just plain careless and rude.

  31. anonymous postdoc Says:

    I agree wholeheartedly with Grumble’s position, and advocate the publication and citation of well-written reviews.

    That said, I will now offer a devil’s advocate point of view based on rereading the original post, where DM complains about how terrible it is for the original research author to not be directly cited. In certain cases, it actually can benefit them.

    The propagation of “me-too” review articles, sometimes themselves citing review articles, can create the impression of a great deal of consensus and data on a topic/field which is in fact still very much in the “stabs in the dark” stage.

    This can become apparent if, by following the rabbit hole of citations, you find that many publications speaking of the significance of a given mechanism to bunny hopping in fact all depend on a very small set of glam to semi-glam publications, often from the same 1-2 labs. If one finds oneself in the position to, say, befriend the postdocs from these labs, one might find that the stories are far less pretty than the glam publications, and subsequently the slew of review articles, are making it appear.

    In these situations, short of having the resources and clout to attempt to publish unpopular experiments demonstrating variability and equivocation, the best a smaller operation can do is point out the holes…in review articles.

  32. drugmonkey Says:

    Are you arguing the proliferation of me-too reviews is good???!

    Twerck that!

    I follow more than one topic area at the moment where the number of primary works is overshadowed by review articles. It is maddening.

    Stop writing reviews and do some research people!!!!

  33. drugmonkey Says:

    There should be a rule that you can’t write a review unless you’ve published three original research papers in that topic/area of focus.

    Also a rule that your total number of review articles cannot surpass your original research articles.

  34. Jim Woodgett Says:

    Think of the poor, good-hearted, vicarious reviewer who can turn around the field that ze has studiously observed for many years from afar. I’m sure they only want to help….

    The real problem here is that people are citing reviews that aren’t adding anything to the primary papers. Such aggregator/laundry list reviews should be left to wither in the citation desert. Instead, people too lazy to read the primary literature cite the first review on the PubMed search (heck, preselect filter: Review in the search and you’re done).

    I must admit, I don’t spend a lot of time when reviewing manuscripts checking on their reference appropriateness. I focus on their data and their discussion of it.

  35. drugmonkey Says:

    (Ps. Yes and I *know* the funding picture is to blame. People are writing these reviews to look like they are players while they try to round up the grant funding. That is also maddening and I wish the $$$ would shake out of the tree for these people)

  36. Walter Says:

    “Stop writing reviews and do some research people!!!!”

    How much research do you get done considering you have 65k twitter posts (do the math on how much of your week is spent on twitter) and seem to spend most of the work day on social media blogging?

    Your time would be better spent writing a review than bitching about others that write them.

  37. drugmonkey Says:

    Wow it must take you a long time to compose a simple sentence Walter.

  38. Walter Says:

    You’re not just composing a sentence. You’re reading other people’s tweets, retweeting, commenting, etc. Heck, you even end up blogging about what you read on twitter…

    When did you launch your twitter page? Even if you launched it on day one of twitter, with over 66k posts over 7 years, that’s still 25 posts per day (assuming 365 days / year). If we look at most of your posts, they seem to be during the work day (during which you are presumably being paid from NIH funds?) - already today you probably have over 100.

    Even if we assume 30 seconds each (very conservative)…how much time of tax payer/research dollars do you waste on twitter?

    The fact that you comment/retweet all day means you’re monitoring it non-stop as well.

  39. Grumble Says:

    Wow, who knew? I always thought DM was an actual monkey. Turns out he’s a bird!

  40. drugmonkey Says:

    The fact that you comment/retweet all day means you’re monitoring it non-stop as well.

    It does? I think you may want to redo your analysis. It fails even the most cursory check under peer review.

    Even if we assume 30 seconds each (very conservative)

    it takes you 30 seconds to push the RT button? wow. you might want to get that looked at.

  41. drugmonkey Says:

    Dude, aren’t you the one always arguing that it is better for science if you publish in faster, low IF journals and then synthesize your brilliant ideas in a review???

    Nope. I am the one arguing for publishing your data. Then, if you absolutely insist on this “complete story” nonsense, that is when I suggest that your alternative is to write a review if you think your peers cannot put it all together for themselves from your Discussion sections.

    It is your unquestioning adherence to the fantasy of “complete story” that leads you to conflate these two things in your mind as being my argument.

  42. Walter Says:

    I thought you knew some statistics, being a researcher…

    That 30 sec estimate assumes a distribution of 5 sec to read/retweet and 1 min to read, retweet, and respond. Plus, I’m assuming you read other tweets that aren’t retweeted; factor that time in.

    But if you keep wasting 2 hours a day arguing on twitter, it will at least allow you to find someone other than yourself to blame when you don’t get funding.

  43. drugmonkey Says:

    You read slowly too? Guess I should’ve guessed that.

  44. Walter Says:

    Insulting me, one of your readers, doesn’t help defend your position very well. In fact it’s quite childish. I pointed to hard numbers that suggest you waste a lot of time on twitter. Rather than just accept it and own up to it, you try to deflect by attacking me.

    It’s funny that you need to post under anonymity, though I’m sure if your identity were revealed and colleagues realized how you act when you think no one is looking, it would change your tone (and level of tweeting).

  45. drugmonkey Says:

    Many of my colleagues, local and in my subfield, are fully aware of who I am and have been for years. None have suggested I change my tone.

    Your “hard numbers” depend entirely upon your assumptions about how long it takes to do things like read a tweet, hit the RT and, yes, write a sentence or two. I laugh at you because you either have never done it or are *incredibly* slow.

  46. drugmonkey Says:

    And Walter. dude. The irony. You are pissed about my comments on review articles and instead of addressing the merits you do what?

  47. Walter Says:

    Well, I agree with the point you make in this post, so I can’t argue with you here about anything. I took issue with your comment that reviews are a waste of time compared to real science. I would argue that tweeting/blogging is a waste of time compared to focused reviews and/or real science!

    Besides, I’m an anonymous blog reader, I’m not committed to sticking on topic…

  48. jebyrnes Says:

    Three responses. One – you should always ask yourself, what is the most appropriate citation here? Sometimes it really is a review, as they tie together many disparate threads, and deserve the citation credit! Two, if you are citing a long list of facts about something and a journal has a strict citation limit, be pragmatic and use a review so that you can cite more relevant specific pieces later and not piss someone off. Three – so, I’m curious, where do you view meta-analysis in all of this?

  49. drugmonkey Says:

    Meta analysis != “review”

  50. Brugg Says:

    We should start using Supplemental References, to allow citations beyond the 25 or 50 most important/relevant to the manuscript. Already we allow Supplemental Results/Data and Supplemental Methods, so why not refs?

  51. DM Says:

    Supplemental Refs aren’t indexed in ISI or Google Scholar, for one thing….
