Thoughts on the Least Publishable Unit
October 19, 2007
A reader dropped the blog an email note which, among other things, asked for a discussion of the concept of the “least publishable unit” or LPU.
Apparently this concept is popular enough that Wikipedia has an entry on the “LPU”:
In academic publishing, the least publishable unit (LPU) is the smallest amount of information that can generate a publication in a peer-reviewed journal. The term is often used as a joking, ironic, or sometimes derogatory reference to the strategy of pursuing the greatest quantity of publications at the expense of their quality. … There is no consensus among academics about whether people should seek to make their publications least publishable units.
I’m not sure I can add much to a defense of the LPU approach published in the Chronicle some time ago. I agree with much of it. My view boils down to one essential concept: as an empirical scientist, one’s bread-and-butter work product is the original observation published in a peer-reviewed archival journal. Period. If you are not publishing you are not doing science. I am consequently a bit hard line with the trainee who says “The evil PI won’t let me publish” or “Well, I’m just not getting results”, etc. This is part of the training/learning process, true, and a lack of publishing has very different consequences for grad students, postdocs, 2nd postdocs and jr. faculty. Nevertheless my response is that the trainee has to learn how to publish. How to navigate the PI’s concerns, conflicting motivations (legitimate differences in the High Impact / Just Publish Already ratio), bias or inattention. How to work specifically toward a publication with what ya got, rather than continuing a monolithic thesis-proposal plan that is getting bogged down. When to cut losses with a “methods” paper or slip some new data into a review article. How to deal with competing lab members. Etc.
The critical issue for the scientist is, of course: should one publish LPU papers? What fraction of one’s output can be LPU? Does it even matter whether they are “substantive” or “LPU” if the journal Impact Factor is such an overriding concern?
I’ve previously discussed the fact that scientific productivity can come up at grant review and one of the interesting things is that there are no objective standards to reference. The most critical issue would be for a competing continuation application in which reviewers are trying to see if the last competing funding interval has been sufficiently “productive”. One reviewer might say “great production” where another just shrugs “not particularly impressive”.
Reviewers tend not to have any data-based standards on which to rely, but Druss and Marcus (co-authors of the Nath et al. 2006 paper on retraction patterns that I cannot cite enough) published a 2005 paper titled “Tracking publication outcomes of National Institutes of Health grants”:
On average, each grant produced 7.58 MEDLINE manuscripts (95% confidence interval [CI]: 7.47 to 7.69) and 1.61 publications in a core journal (95% CI: 1.56 to 1.65). A total of 6.4% of R01 grants were not associated with any publications
during the study period. These values are weighted by the number of grants funding the publication to prevent double-counting of papers.
Among new, 5-year grants funded in 1996, the total number and the number of publications in core journals peaked during the final year of the funding period, and decreased steadily in the 4 years after the completion of the grant … Manuscript output for basic science grants peaked during years 4 and 5 of the grant, whereas the clinical grants peaked during year 5 of the grant and during the subsequent 4 years.
So at least there is that.
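The grant-weighting that Druss and Marcus describe (each paper counted fractionally so that multi-grant papers are not double-counted) can be sketched in a few lines. This is a minimal illustration of the general idea with made-up paper and grant names, not the authors’ actual method or data:

```python
# Assumed scheme, matching the "weighted by the number of grants funding
# the publication" description: a paper acknowledging k grants contributes
# 1/k to each grant's weighted count, so the weighted totals sum to the
# true number of papers.
from collections import defaultdict

# Hypothetical mapping of papers to the grants they acknowledge.
papers = {
    "paper_A": ["R01_1"],
    "paper_B": ["R01_1", "R01_2"],  # funded by two grants
    "paper_C": ["R01_2"],
}

weighted = defaultdict(float)
for paper, grants in papers.items():
    for g in grants:
        weighted[g] += 1.0 / len(grants)

print(weighted["R01_1"])        # 1.5 (one whole paper plus half of the shared one)
print(sum(weighted.values()))   # 3.0 -- equals the total paper count
```

The point of the fractional weighting is simply that summing raw per-grant counts would credit the shared paper twice, inflating the aggregate output figures.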
Getting back to the individual CV, however, I can offer my rules of thumb. The usual caveats will apply in that the most important thing is to understand the expectations in your particular subfields. Even those will vary depending on the context (grant review? promotion? job-seeking?). Our correspondent wondered is it better to have 10 substantive pubs or 30 LPUs (relevant to tenure decision) and I’m not sure I can venture a definitive answer on that although I have some thoughts related to the consistency of output and decision tradeoffs below. I think the most general advice is to scrutinize your CV for balance with an eye to what “pops out”. Are you below average for your area or department on total (or first/senior author) pubs? Well for sure you are going to lean toward the LPU approach. Is your CV almost exclusively listing journals which are at the lower end of the basic expectation for your field? Well, you may need to lean toward that slight upgrade of IF approach. Are your pubs all straightforward descriptive experiments? Maybe you need to work on something a little more mechanistic for now.
Publication Rate: Very roughly, at least a paper per year can be considered a reasonable starting target. A corollary is that while a gap of a year may be no big deal, the longer the gap in “Publication Years” the worse it looks. The strategic considerations are clear. First, you need to keep your submission stream as regular as possible to account for delays in review and the lengthy time-to-pub for many journals. A manuscript submitted to many journals after about July is very likely to end up with the next year as its “publication year”, even if it does come out on the pre-print archive. Remember, your CV is going to be judged in retrospect and it is not clear that anyone will pay attention to “Epub 200x…” over the formal publication year. Second, in order to keep a regular publication rate, well, sometimes you may have to move more toward LPU than you would otherwise desire. Third, this suggests that it will be valuable to construct your research program such that you have some steadily publishable data stream always ticking away.
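The “gap in Publication Years” idea reduces to simple arithmetic on the years listed on a CV. A toy sketch, using invented years, of what a reviewer skimming your pub list is implicitly computing:

```python
# Hypothetical publication years pulled off a CV; duplicates are fine
# (multiple papers in the same year).
pub_years = [2001, 2002, 2002, 2004, 2007]

years = sorted(set(pub_years))
# Longest run of calendar years with no paper at all.
max_gap = max(b - a - 1 for a, b in zip(years, years[1:]))

print(max_gap)  # 2 -- the empty years 2005 and 2006
```

On this reading, a `max_gap` of 0 or 1 draws no attention, while anything larger is the kind of hole that invites the “what was going on here?” question discussed under Exceptions below.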
Now we get to the caveats and tradeoffs which make things interesting. I offer two important tradeoffs for you to consider.
First is the Publication Rate vs. Scientific Quality tradeoff, which is more relevant to tenure/hiring than to grant review and is the obsession of old school Ivory Tower type departments and individuals. This is that nebulous quality of senior people in your field thinking each paper of yours is a truly significant undertaking with depth and breadth. These are the types that are most likely to use the actual term “LPU” in a derogatory manner when judging your CV. While in principle I am sympathetic, this approach is just not compatible with NIH funding cycles, and therefore with my concept of the modern biomedical science career. As mentioned above, grant review considers “productivity” and it is a lot easier to agree on “8 papers in three years” than it is to agree “well, this one paper was particularly significant because…”. Even discussions such as “Well yes, this very senior lab in Year 20 has only been getting about 1 per year from this grant but each one is chock-full of experiments….” can fall a little flat. Subjective evaluations can be doubted; objective numbers like pubs-per-year cannot. Not to mention that in many (most?) cases reviewers (and search / promotion committee members) may be too lazy to get much beyond the box score. This is particularly important for younger and transitioning scientists because there is in some sense less expectation that the person will be generating Big and Significant papers. So the analysis stops with “how many first author? how many senior author? what Impact Factor journals?”. Of course, if you are in a mainly teaching department with little focus on research support, you will want to stay away from LPU in pursuit of really big and important papers.
The second is the Publication Rate vs. Impact Factor tradeoff, which is a more modern concern in which the much-discussed Impact Factor of the journals you publish in is taken as the proxy for “Scientific Quality”. There is usually a direct relationship between IF and the amount of work required and range of data included. Therefore shooting for a higher IF publication is generally going to be detrimental to your Publication Rate. However, if the upgrade in IF is sufficient, it can account for less frequent production. This is where it gets tricky and highly field dependent. For sure C/N/S publications are acknowledged to be hard to attain and I can imagine that if one had a sustained output of these pubs (as first and senior author, depending on your level) one could probably publish once every three years and get away with it. (Kind of like that Nature Medicine 20 pubs lifetime suggestion.) From my perspective it can be very risky. I’ve seen at least one colleague work for over 5 years now trying to get a hit with a very high Impact journal while publishing very little else; it will be interesting to see if this PI gives it up eventually or suffers horrible consequences or succeeds and never looks back. Balance is my advice.
Below the very highest impact factor journals, well, I just don’t see where you would want to give up much in terms of consistent output in pursuit of a couple of IF points. I might look at it like this in a typical example. Suppose your bread-and-butter LPU papers would go in journals in the 2-4 IF range. An upgrade to a journal in the 6-8 or 8-10 IF range is likely significant in your career evaluation, so you are thinking (right?) about ways to get one of these every “once in a while”. This might translate to 1 out of 4 papers per year, one every 3 years, etc., depending on your area/expectations. The sort of tradeoff that might be okay would be reducing one year’s output by a couple of pubs, absorbing a one-year gap, that sort of disruption. A 3-4 year gap to publish in J Neuroscience? Well, not so many areas I can think of where that would be a good idea. There is another consideration too. In my work for example, which matches the above considerations in broad strokes, the “upgrade” doesn’t really result from a compromise in output rate. Rather it would result from the confluence of a particular area of work in the lab, the empirical outcome of really cool results and the coincidence of nebulous “hotness” of a given area. In short, it wouldn’t really require hard tradeoff choices in my work, but would rather be a sort of emergent property such as “Hey, I think this particular story we’re working on has a shot of going higher than usual if we just do a couple of things”.
Exceptions: There will always be reasons why you have suspiciously low pub numbers for some reason or other in a given interval. A graduate school lab or postdoc that just didn’t work out. Major illness for yourself or family. Child bearing/rearing. Pinworm wipes out animal colony. Katrina. Lost a grant for a while. Lost a vote and had to chair the dept for three years! My advice is to be as upfront and transparent as possible. People who are reviewing your CV are not going to overlook the gaps and weaknesses, and they are going to be curious as to what was going on. Confusion or “things not adding up” in the mind of a reviewer is a bad thing. In grant review, the central question is (or should be) “How does the prior history predict the success of the current proposal?”. The point of the exercise should not be punitive (in which case you might be inclined to conceal things in the past) but rather predictive. Unfortunately you don’t really want to highlight weaknesses if you don’t have to. So in my mind this becomes a matter of waiting for your inevitable response to critique. Once a weakness has been noted, that’s when you need to come in with the full explanation. I will say that I have seen lack of production due to usual factors such as child bearing/rearing, a training advisor leaving the institute/science unexpectedly, natural disaster, lab disaster, etc., be treated very sympathetically, once the explanation was provided. On the other hand, I’ve seen generic responses such as “gee, we’re trying to get our manuscripts out” go over very poorly. To return to a common theme of mine, all the advocate needs is something to work with.
Specific to the grant submission, there may be situations in which your CV weaknesses are such obvious StockCritique bait that you’ll want to address them in the -01. How? Well, I’d say one place is in the preliminary data (intro or summary paragraph) and the final paragraph on the Specific Aims page. These are the places where you will be drawing together the bragging narrative about how great you are as PI, how fantastic the environment and why this confluence is the only place the work could possibly be done. This is where you may want to consider slipping in comments such as the following (no, these are not quotes, but I’ve seen (and used) similar).
“Although the PI’s mentor was distracted with a tenure fight and finding a new job, thus resulting in few publications from graduate training, the PI was running the research program as a graduate student, including blah, blah, blah. This experience has been invaluable in the technical aspects of running the recently established program…”
“Since returning from maternity leave the PI has focused her full attention on …”
“Scientific production was hampered by a murine pathogen in the colony requiring re-derivation…. These problems have been addressed by X, Y and we are back to Z status of our resources…”