RePost: Thoughts on the Least Publishable Unit

January 22, 2009

A recent post by ScienceWoman in response to a reader query from Mrs. Comet Hunter reminded me of this post I put up on the old blog Oct 19, 2007.


A reader dropped the blog an email note which, among other things, was interested in a discussion of the concept of "least publishable unit" or LPU.
Apparently this concept is popular enough that Wikipedia has an entry on the "LPU":

In academic publishing, the least publishable unit (LPU) is the smallest amount of information that can generate a publication in a peer-reviewed journal. The term is often used as a joking, ironic, or sometimes derogatory reference to the strategy of pursuing the greatest quantity of publications at the expense of their quality. … There is no consensus among academics about whether people should seek to make their publications least publishable units.


I'm not sure I can add much to a defense of the LPU approach published in the Chronicle some time ago. I agree with much of it. My view boils down to one essential concept: as an empirical scientist, one's bread and butter work product is the original observation published in a peer-reviewed archival journal. Period. If you are not publishing you are not doing science. I am consequently a bit hard line with the trainee who says "The evil PI won't let me publish" or "Well, I'm just not getting results", etc. This is part of the training/learning process, true, and a lack of publishing has very different consequences for grad students, postdocs, 2nd postdocs and jr. faculty. Nevertheless my response is that the trainee has to learn how to publish. How to navigate the PI's concerns, conflicting motivations (legitimate differences in the High Impact / Just Publish Already ratio), bias or inattention. How to work specifically toward a publication with what ya got, rather than continuing a monolithic thesis-proposal plan that is getting bogged down. When to cut losses with a "methods" paper or by slipping some new data into a review article. How to deal with competing lab members. Etc.
The critical issue for the scientist is, of course: should one publish LPU papers? What fraction of one's output can be LPU? Does it even matter whether they are "substantive" or "LPU" if the journal Impact Factor is such an overriding concern?
I’ve previously discussed the fact that scientific productivity can come up at grant review and one of the interesting things is that there are no objective standards to reference. The most critical issue would be for a competing continuation application in which reviewers are trying to see if the last competing funding interval has been sufficiently “productive”. One reviewer might say “great production” where another just shrugs “not particularly impressive”.
Reviewers tend not to have any data-based standards on which to rely, but Druss and Marcus (co-authors of the Nath et al. 2006 paper on retraction patterns that I cannot cite enough) had a 2005 paper titled "Tracking publication outcomes of National Institutes of Health grants":

On average, each grant produced 7.58 MEDLINE manuscripts (95% confidence interval [CI]: 7.47 to 7.69) and 1.61 publications in a core journal (95% CI: 1.56 to 1.65). A total of 6.4% of R01 grants were not associated with any publications
during the study period. These values are weighted by the number of grants funding the publication to prevent double-counting of papers.

Among new, 5-year grants funded in 1996, the total number and the number of publications in core journals peaked during the final year of the funding period, and decreased steadily in the 4 years after the completion of the grant … Manuscript output for basic science grants peaked during years 4 and 5 of the grant, whereas the clinical grants peaked during year 5 of the grant and during the subsequent 4 years.

So at least there is that.
Getting back to the individual CV, however, I can offer my rules of thumb. The usual caveats apply in that the most important thing is to understand the expectations in your particular subfields. Even those will vary depending on the context (grant review? promotion? job-seeking?). Our correspondent wondered whether it is better to have 10 substantive pubs or 30 LPUs (relevant to a tenure decision) and I'm not sure I can venture a definitive answer on that, although I have some thoughts related to the consistency of output and decision tradeoffs below. I think the most general advice is to scrutinize your CV for balance with an eye to what "pops out". Are you below average for your area or department on total (or first/senior author) pubs? Well, for sure you are going to lean toward the LPU approach. Is your CV almost exclusively listing journals which are at the lower end of the basic expectation for your field? Well, you may need to lean toward that slight upgrade-of-IF approach. Are your pubs all straightforward descriptive experiments? Maybe you need to work on something a little more mechanistic for now.
Publication Rate: Very roughly, at least a paper per year can be considered a reasonable starting target. A corollary is that while a gap of a year may be no big deal, the longer the gap in "Publication Years" the worse it looks. The strategic considerations are clear. First, you need to keep your submission stream as regular as possible to account for delays in review and the lengthy time-to-pub for many journals. A manuscript submitted to many journals after about July is very likely to end up as the next year's "publication year", even if it does come out on the pre-print archive. Remember, your CV is going to be judged in retrospect and it is not clear that anyone will pay attention to "Epub 200x…" over the formal publication year. Second, in order to keep a regular publication rate, well, sometimes you may have to move more toward LPU than you would otherwise desire. Third, this suggests that it will be valuable to construct your research program such that you have some steadily publishable data stream always ticking away.
Now we get to the caveats and tradeoffs which make things interesting. I offer two important tradeoffs for you to consider.
First is the Publication Rate vs. Scientific Quality tradeoff, which is more relevant to tenure/hiring than to grant review and is the obsession of old-school Ivory Tower type departments and individuals. This is that nebulous quality of senior people in your field thinking each paper of yours is a truly significant undertaking with depth and breadth. These are the types that are most likely to use the actual term "LPU" in a derogatory manner when judging your CV. While in principle I am sympathetic, this approach is just not compatible with NIH funding cycles, and therefore with my concept of the modern biomedical science career. As mentioned above, grant review considers "productivity" and it is a lot easier to agree on "8 papers in three years" than it is to agree "well, this one paper was particularly significant because…". Even discussion such as "Well, yes, this very senior lab in Year 20 has only been getting about 1 per year from this grant but each one is chock-full of experiments…" can fall a little flat. Subjective evaluations can be doubted; objective numbers like pubs-per-year cannot. Not to mention that in many (most?) cases reviewers (and search / promotion committee members) may be too lazy to get much beyond the box score. This is particularly important for younger and transitioning scientists because there is in some sense less expectation that the person will be generating Big and Significant papers. So the analysis stops with "how many first author? how many senior author? what Impact Factor journals?". Of course, if you are in a mainly teaching department with little focus on research support, you will want to stay away from LPU in pursuit of really big and important papers.
The second is the Publication Rate vs. Impact Factor tradeoff, a more modern concern in which the much-discussed Impact Factor of the journals you publish in is taken as the proxy for "Scientific Quality". There is usually a direct relationship between IF and the amount of work required and the range of data included. Therefore shooting for a higher IF publication is generally going to be detrimental to your Publication Rate. However, if the upgrade in IF is sufficient, it can account for less frequent production. This is where it gets tricky and highly field dependent. For sure, C/N/S publications are acknowledged to be hard to attain and I can imagine that if one had a sustained output of these pubs (as first and senior author, depending on your level) one could probably publish once every three years and get away with it. (Kind of like that Nature Medicine 20-pubs-lifetime suggestion.) From my perspective it can be very risky. I've seen at least one colleague work for over 5 years now trying to get a hit with a very high Impact journal while publishing very little else. It will be interesting to see if this PI gives it up eventually, suffers horrible consequences, or succeeds and never looks back. Balance is my advice.
Below the very highest impact factor journals, well, I just don't see where you would want to give up much in terms of consistent output in pursuit of a couple of IF points. I might look at it like this in a typical example. Suppose your bread and butter LPU papers would go in journals in the 2-4 IF range. An upgrade to a journal in the 6-8 or 8-10 IF range is likely significant in your career evaluation, so you are thinking (right?) about ways to get one of these every "once in a while". This might translate to 1 of 4 per year, one every 3 years, etc., depending on your area/expectations. The sort of tradeoff that might be okay would be reducing one year's output by a couple of pubs, or absorbing a one-year gap, that sort of disruption. A 3-4 year gap to publish in J of Neuroscience? Well, there are not so many areas I can think of where that would be a good idea. There is another consideration too. In my work, for example, which matches the above considerations in broad strokes, the "upgrade" doesn't really result from a compromise in output rate. Rather, it would result from the confluence of a particular area of work in the lab, the empirical outcome of really cool results, and the coincidence of nebulous "hotness" of a given area. In short, it wouldn't really require hard tradeoff choices in my work, but would rather be a sort of emergent property, such as "Hey, I think this particular story we're working on has a shot of going higher than usual if we just do a couple of things".
Exceptions: There will always be reasons why you have suspiciously low pub numbers in a given interval. A graduate school lab or postdoc that just didn't work out. Major illness for yourself or family. Child bearing/rearing. Pinworm wipes out the animal colony. Katrina. Lost a grant for a while. Lost a vote and had to chair the dept for three years! My advice is to be as upfront and transparent as possible. People who are reviewing your CV are not going to overlook the gaps and weaknesses, and they are going to be curious as to what was going on. Confusion or "things not adding up" in the mind of a reviewer is a bad thing. In grant review, the central question is (or should be) "How does the prior history predict the success of the current proposal?". The point of the exercise should not be punitive (in which case you might be inclined to conceal things in the past) but rather predictive. Unfortunately, you don't really want to highlight weaknesses if you don't have to. So in my mind this becomes a matter of waiting for your inevitable response to critique. Once a weakness has been noted, that's when you need to come in with the full explanation. I will say that I have seen lack of production due to usual factors such as child bearing/rearing, a training advisor leaving the institute/science unexpectedly, natural disaster, lab disaster, etc., be treated very sympathetically, once the explanation was provided. On the other hand, I've seen generic responses such as "gee, we're trying to get our manuscripts out" go over very poorly. To return to a common theme of mine, all the advocate needs is something to work with.
Specific to the grant submission, there may be situations in which your CV weaknesses are such obvious StockCritique bait that you'll want to address them in the -01. How? Well, I'd say one place is in the preliminary data (intro or summary paragraph) and the final paragraph of the Specific Aims page. These are the places where you will be drawing together the bragging narrative about how great you are as PI, how fantastic the environment, and why this confluence is the only possible place the work could be done. This is where you may want to consider slipping in comments such as the following (no, these are not quotes, but I've seen (and used) similar).
“Although the PI’s mentor was distracted with a tenure fight and finding a new job, thus resulting in few publications from graduate training, the PI was running the research program as a graduate student, including blah, blah, blah. This experience has been invaluable in the technical aspects of running the recently established program…”
“Since returning from maternity leave the PI has focused her full attention on …”
“Scientific production was hampered by a murine pathogen in the colony requiring re-derivation…. These problems have been addressed by X, Y and we are back to Z status of our resources…”


  1. Alex Says:

    Sometimes a least publishable unit can even be an important piece of work. I’m a theoretical physicist, I did a couple of calculations recently, those calculations turned out to be pretty important, and I got them into a pretty good journal. There’s no way I could have made any sort of meaningful paper out of them if I had divided the work into 2 papers, because the results sort of had to go together to make sense, and as it is the paper was only 3 pages. So in that sense it’s an LPU. But in terms of Impact Factor, it’s a lot better than assembling the smallest smidgeon of results and sending them to the lowest-tier journal possible just to get a paper out.


  2. nm Says:

    We’re trying a new theory in our clinical research department. We’re designing big important studies with the upfront quality assessment that if they ‘work’ they will be good enough to have a genuine shot at NEJM, JAMA or Lancet.
    In between these big studies, smaller papers naturally fall out anyway as people realise secondary hypotheses using the same data sets.
    We also figure that a NEJM paper once every three years is more than sufficient output but that shooting that high is very risky and we’d better have sustained output in our top sub-specialty journals while we’re at it.
    Don’t know if this is going to work as we’re only just really starting…


  3. Dave Says:

    Advice I have heard from prominent HHMI scientist:
    When you’re starting out, publish anything and everything to show you can get stuff done. Don’t hold out for the awesome Nobel-worthy Cell paper at the risk of coming up for grant review or tenure with no publications. Most tenure committee members won’t know the difference between Cell and Journal of Erratic Observations anyway. And most grant reviewers recognize that not everyone is a Glamourmag God.
    But eventually make sure you get an occasional significant high-profile excellent paper, to demonstrate that you can indeed do excellent important groundbreaking science. That way, when people count your publications, they’ll be able to note that your science seems to be well-respected as well as plentiful.
    I guess this advice translates to: LPU when you can, but don’t get a reputation for it.


  4. pinus Says:

    I have been given the same advice by prominent scientists in my field.
    At first…just show you can do it yourself; that 1st pub on your own can be rough (this is what I hear at least!!!)…then once you can show you can publish on your own…start working on some more interesting high profile stuff.


  5. S. Rivlin Says:

    I hate the idea of the LPU. I think that in a way this practice aims at creating a false image of a productive scientist simply based on the count of her publications, with no relation to the content of each of her publications. The interesting thing is that we all know what the LPU stands for, yet we accept it, especially on CVs, as a reliable indicator of one’s quality of productivity. Even publishing an LPU in a high IF journal does not guarantee a high impact paper. In my CV I always included in parentheses by each publication the number of citations it received. One should always be impressed more by a paper in Neuroscience, 2004 that has received as of December, 2008, 100 citations than by three publications in that received as of December, 2008, a total of 20 citations.


  6. S. Rivlin Says:

    For some reason, in my previous comment the sentence “One should always be impressed more by a paper in Neuroscience, 2004 that has received as of December, 2008, 100 citations than by three publications in that received as of December, 2008, a total of 20 citations.” was truncated. It should read:
    One should always be impressed more by a paper in Neuroscience, 2004 that has received as of December, 2008, 100 citations than by three publications in J. Neurosci, 2003,2004,2005 that received as of December, 2008, a total of 20 citations.


  7. Dave Says:

    The problem with relying on citations, Sol, is that you can be a review-writer or methods hack in a popular field and get oodles of citations, or a true brilliant pioneer in a nascent or less populated field and get few citations. In other words, citation number can be a poor measure of
    …whatever people who are looking at your CV are trying to measure. Which is something else we can argue about.


  8. nm Says:

    Citation counts, much like the h-factor, also unfairly favour older scientists regardless of quality. Great for Sol, bad for me.
    But then again if I had a paper cited 100 times I’d make that clear too. Until then Impact Factor is all I’ve got to indicate quality to people unfamiliar with my work.


  9. pinus Says:

    I have had a paper cited over 150 times (it is only 4 years old).
    This may be a naive question…but should I make a note of that in my CV?


  10. Dave Says:

    CVs are a whole ‘nuther topic worth discussing. I’m curious myself how people respond to certain forms of these documents.
    NM,
    If I received the CV of someone who pointed out that a paper of theirs had been cited 150 times, I’d wonder about the other papers that didn’t have a similar number attached. If it were the CV of a job candidate or something I might check the papers out (with Google Scholar), or I’d just forego the effort and assume that the citation rates of everything else were not mentioned because the CV author is basically a one-hit wonder looking to cover that fact. In which case I’d write off the whole CV as an untrustworthy fluff document produced by an undiscerning braggart. If your paper is really awesome, people will recognize it. If they don’t, I don’t think you’ll get ahead trying to convince readers of your CV that they are ignorant nonexperts (even if they may be).
    I also react negatively when I see lots of semi-bogus ‘Awards and Honors’, or one’s dissertation or SFN abstracts listed as publications. That kind of stuff just makes me question other probably quite valid accomplishments that are listed with less hype, and form the impression that the CV-writer’s ego might be intolerable.
    Better to be straightforward and humble, I think, in your documents.


  11. DrugMonkey Says:

    Oh yes, I’m totally in favor of putting actual citations. For all papers and of course divided by the impact factor of the journal when it was published. Feel free to call that the d-index….


  12. pinus Says:

    I need to make some sort of index that penalizes people older than me…some sort of ‘greying’ factor could be incorporated…(it would be negative, of course).
    For reference, I have never put in any citation count for any paper. I have never seen anybody at my level actually do that. However, my grad school PI did have a few such notes on his CV.


  13. qaz Says:

    Although I was taught to disdain the LPU and to publish full-length, complete results (think old school dozen+ page JNeurophys articles), one of the things I’ve found is that people reading articles can only seem to glean one thing from any article. So if you’ve got three major results in one article, two of them are going to get lost. Especially with high-impact journals (N/S; I don’t read C, it’s out of my field), you only have room for one clean result. It’s made me rethink some of the LPU issues. I still think there is an issue with slicing too thin, but I’ve been finding the line harder to see these days.
    PS. Number of citations is a terribly unreliable way to identify impact. There was a guy in my old field who published a lot of papers, a lot of them in his own journal (yes, it’s listed in PubMed), and cited himself and his students in a high proportion of his own papers. Needless to say, his citation number did not reflect his impact. Because citation practices differ greatly between fields, citation numbers are meaningless to compare. (And when I say fields, I don’t just mean biology and math, I mean electrophysiology and protein genetics.)


  14. Mike_F Says:

    This discussion reminds me of a comment from my Ph.D. mentor (may he rest in peace) when I left for postdoc. “Remember, quality is important, but it’s always better to have plenty of quality…”


  15. scicurious Says:

    Thanks for this, it’s really good to see this kind of advice pretty well aggregated in one place. I know I mostly receive it in snippets, and then it’s very hard to remember the relevant information when the time comes.
    I was wondering, what ARE the rules for slipping new data into a review? I was under the impression that that was Not Done, but I have seen it in a few places.


  16. S. Rivlin Says:

    pinus, I always include the number of citations for all my papers, including those that receive only 1 citation or no citations. Including the number of citations requires updating them at least once a year. Of course, more recent papers, in general, will have fewer citations than older ones. The graying factor is bogus since one should measure the age of the paper, not the age of the scientist. On the other hand, an old paper that continues to receive citations many years after its publication does indicate an important impact.
    Also, we all need to remember that journals’ IF is based on number of citations.
    Dave, it is true that ‘review’ papers and ‘method’ papers do get, on average, more citations, but any CV reader should know to distinguish such papers from the rest of the list. As to ‘abstracts’ listed as papers, one should separate peer-reviewed papers from invited papers and also should list abstracts separately. Moreover, a CV that has a list of abstracts significantly longer than a list of peer-reviewed papers is a negative in my book.


  17. whimple Says:

    I was wondering, what ARE the rules for slipping new data into a review? I was under the impression that that was Not Done, but I have seen it in a few places.
    Why would you want to put new data in a review?


  18. DrugMonkey Says:

    I was wondering, what ARE the rules for slipping new data into a review? I was under the impression that that was Not Done, but I have seen it in a few places.
    Why would you want to put new data in a review?

    I see this not infrequently as a stratagem to dump little bits of data that are not enough to stand as a paper. Why? Reasons vary, no doubt, but an essential component is the prediction on the part of the authors that they will be unable to follow up or complete the work. Is it scientifically justified? Well, it does have to get through peer review like anything else (I’m referring to things that look very much like unsolicited submissions to me, although one never knows of course). In several cases that I can think of, I am very happy that the data ended up reported and understand quite clearly why they would never have been extended into “more”. Usually having to do with grant status, trainees moving on to new postings, etc. There are cases where I wonder why it justified a paper and I would have beat them up in review; ymmv, obviously.
    I have a reflexive distaste for the practice but the more of them I see and find to be useful in my work, the less I am justified in objecting. For those of us worried about the mass of data that just goes unreported for various reasons, this is a solution.


  19. pinus Says:

    Sol,
    Obviously the graying factor was a joke. I am more than willing to let my science do the talking, rather than an index.
    I guess I have to recalibrate my joke writing machine to be more obvious.
    sincerely,
    Science doing machine


  20. DrugMonkey Says:

    Moreover, a CV that has a list of abstracts significantly longer than a list of peer-reviewed papers is a negative in my book.
    This makes no sense whatsoever. A conference presentation is less than a paper in scope. Otherwise, why bother? So it makes sense that an active, engaged, nonparanoid scientist would have many, many more abstracts on the full-monty CV than eventually result in papers…right?
    The question of where/when you should be submitting your fullest of Monty CV is another question though. I guess for the most part I am assuming that (save trainees who are just starting anyway) abstracts are going to only be listed on a pretty full-bore CV, the one that details everything about your academic life…


  21. Dave Says:

    Sol: “Dave, it is true that ‘review’ papers and ‘method’ papers do get, on average, more citations, but any CV reader should know to distinguish such papers from the rest of the list.”
    This isn’t as easy as you make it sound. In my field, research papers often use new reagents (typically genotypes or antibodies) in the course of the study. For many of these sorts of papers, most subsequent citations are in methods sections by people who used the reagent but don’t necessarily care about anything else in the original paper. Of course, we can argue that creation of widely-used methods and reagents is as important as introducing new knowledge and ideas, but…
    “As to ‘abstracts’ listed as papers, one should separate peer-reviewed papers from invited papers and also should list abstracts separately. Moreover, a CV that has a list of abstracts significantly longer than a list of peer-reviewed papers is a negative in my book.”
    I agree, and react the same way.
    I see that DM disagrees with the idea that too many abstracts should be a negative. As a grad student, I was told what Sol says (too many abstracts compared to publications is bad), and would also have been perplexed except it was explained why: Many abstracts without many papers suggests: 1) The author does not have the resources or intellectual wherewithal to follow-through on preliminary studies, and/or 2) The author’s work is generally unable to stand the test of peer-review.



  22. CPP Says:

    Other than grad students applying for post-docs, listing abstracts just decreases the signal-to-noise ratio. I never ever ever give even the slightest credence to abstracts in CVs I assess, and if they are there, it is just in the way.


  23. Dave Says:

    Yea, good point CPP. I put abstracts on my CV only because I need to catalog them for university promotion & tenure stuff, and my CV is a handy place to do that. Basically, I use my CV as a sort of dumping ground for professional minutiae. It’s organized and readable, of course, but really there are very few times when a full CV is needed anyway, since granting agencies want specific styles (e.g. NIH Biosketch) or have strict page limits (NSF, many private organizations). The only time I usually unload my full CV on people is when some administrative dork requests it for something retarded (in which case the number of pages impresses them), or if it is requested by someplace I am giving a seminar, in which case the massive info dump usually leads to an introduction that is vague, but also awesomely succinct and flattering.


  24. Dave Says:

    I should add, given my comment about CV pages, that my CV is pretty condensed, with students I’ve had & journals I’ve reviewed for & stuff like that separated by commas and relatively densely packed rather than on separate lines*, and everything is in 11 point font with 0.5 inch margins. Very organized and readable, but not designed to simply fill space. I hate CVs where it looks like the goal is simply to kill trees. Again, that makes me think the CV writer is just trying to fluff his stuff.
    [*Really, no one cares who the people or journals are anyway; they just want to know whether there is a lot or a little, or a wide variety and which types. A simple number would accomplish most of what one needs to convey, but listing names provides a bit more info and credibility than simply listing a number. So I do that. In as little space as possible.]



  25. CPP Says:

    I put abstracts on my CV only because I need to catalog them for university promotion & tenure stuff, and my CV is a handy place to do that.

    Interesting. My institution explicitly tells us not to put abstracts on our promotion/tenure CVs.


  26. Dave Says:

    CPP: Your institution obviously knows what sort of accomplishments are really worthwhile. I am jealous.


  27. JD Says:

    “Interesting. My institution explicitly tells us not to put abstracts on our promotion/tenure CVs.”
    When I was in Canada, the CIHR (the Canadian equivalent of the NIH) made you create two composite scores: the number of 1st-author papers + published abstracts and the number of co-authored papers + published abstracts.
    As the summary number was often used, this made me tend to include published abstracts, as they were specifically requested information for a major granting agency metric. So I’ve tended to track them even here in the US, but it might be a big cultural difference…


  28. S. Rivlin Says:

    DM, what Dave said about many more abstracts in your CV than peer-reviewed papers. Especially if one has abstracts on certain topics without corresponding peer-reviewed papers on those topics.


  29. Pinko Punko Says:

    We always called them MPUs for Minimum Publishable Unit.


  30. nm Says:

    Surely a high abstracts:papers ratio is a sign that you can’t finish the job properly?


  31. DrugMonkey Says:

    Not at all. You seem to be assuming that each abstract should be a paper. Perhaps several posters’ worth of data funnel into one paper. Likewise a poster may be a place to present negative data and failed directions in some cases.


  32. qaz Says:

    What is the problem with abstracts? As long as no one counts them as papers? Abstracts are one of the ways that we communicate as scientists with each other. A lot of abstracts means that one is attending a lot of meetings. Nothing wrong with that. Not having enough papers – now that’s a different issue. But I don’t see how the number of abstracts you have should impact my judgment of whether you have enough papers.
    I should also note that this issue of what is an abstract becomes very complex as other fields become folded into neuroscience. In several fields (engineering and computer fields in particular), abstracts are peer-reviewed with a high rejection rate. One needs to know more about the conference involved than to simply say “abstracts bad”. Nevertheless, even if we are just talking about non-peer-reviewed abstracts (like SFN), having a lot of abstracts means that you are participating in meetings. That’s not a bad thing.


  33. drugmonkey Says:

    Hmm. I wonder whether the “glean one thing” needs to be thought about harder. We might have this reaction when *we* think every figure is the awesome but the field says “meh” to all but Fig 3.


