Raging at the descriptive as if it is prescriptive
November 23, 2020
A quick Google search turns up this definition of prescriptive: “relating to the imposition or enforcement of a rule or method.” Another brings up this definition, and refinement, for descriptive: “describing or classifying in an objective and nonjudgmental way… describing accents, forms, structures, and usage without making value judgments.”
We have trod this duality a time or two on this blog. Back in the salad days of science blogging, it led to many a blog war.
In our typical fights, I or PP would offer comments describing the state of the grant-funded, academic biomedical science career as we saw it. This would usually come in the course of offering what we saw as some of the best strategies and approaches for the individual operating within this professional sphere. Right now, as is, as it is found, etc. For them to succeed.
Inevitably, despite all evidence, someone would come along and get all red in the face about such comments, as if we were prescribing, rather than merely describing, whatever specific or general reality was under discussion.
Pick your issue. I don’t like writing a million grants to get the barest hope of winning one. I think this is a stupid way for the NIH to behave and a huge waste of time and taxpayer resources. So when I tell jr and not-so-jr faculty to submit a ton of grants, this is not an endorsement of the NIH system as I see it. It is advice to help the individual succeed despite the problems with the system. I tee off on Glam all the time… but would never tell a new PI not to seek JIF points wherever possible. There are many things I say about how NIH grant review should go that might seem, to anyone who has been on study section with YHN, to contrast with my actual reviewer behavior. (For those who are wondering, this has mostly to do with my overarching belief that NIH grant review should be fair. Even if one objects to some of the structural aspects of review, one should not blow it all up at the expense of the applications that are in front of a given reviewer.) The fact that I bang on about first and senior authorship strategy for the respective career stages doesn’t mean I believe that chronic middle-author contributions shouldn’t be better recognized.
I can walk and chew gum.
Twitter has erupted in the past few days. Many are very angered by a piece published in Nature Communications by AlShebli et al., which can be summarized by this sentence in the Abstract: “We also find that increasing the proportion of female mentors is associated not only with a reduction in post-mentorship impact of female protégés, but also a reduction in the gain of female mentors.” This was recently followed, in grand old rump-sniffing (demi)Glam Mag tradition, by an article by Sterling et al. in PNAS. The key Abstract sentence for this one was “we find women earn less than men, net of human capital factors like engineering degree and grade point average, and that the influence of gender on starting salaries is associated with self-efficacy.” In context, “self-efficacy” means “self-confidence.”
For the most part, these articles are descriptive. The authors of the first analyze citation metrics, i.e., “We analyze 215 million scientists and 222 million papers taken from the Microsoft Academic Graph (MAG) dataset, which contains detailed records of scientific publications and their citation network.” The authors of the second conducted a survey investigation: “To assess individual beliefs about one’s technical ability we measure ESE, a five-item validated measure on a five-point scale (0 = ‘not confident’ to 4 = ‘extremely confident,’ alpha = 0.87; SI Appendix, section S1). Participants were asked, ‘How confident are you in your ability to do each of the following at this time?’”
Quite naturally, the problem comes in where the descriptive is blurred with the prescriptive. First, because it can appear as if any suggestion of optimized behavior within the constraints of the reality being described is in fact a defense of that reality. Intentional or unintentional. Second, because prescribing a course of action that accords with the reality being described almost inevitably contributes to perpetuating the system being described. Each of these articles is a mixed bag, of course. A key sentence or two can be all the evidence needed to launch a thousand outraged tweets. I once famously described the NSF (in contrast to the NIH) as being a grant funding system designed for amateur scientists. You can imagine how many people failed to note the “designed for” and accused me of calling what I saw as the victims of this old-fashioned, un-updated approach “amateurs.” It did not go well then.
The first set of authors’ suggestions are being interpreted as saying that nobody should train with female PIs because it will be terrible for their careers, broadly writ. The war against the second set of authors is just getting fully engaged, but I suspect it will fall mostly along the lines of the descriptive being conflated with the prescriptive, i.e., that it is okay to screw over the less-overconfident person.
You will see these issues being argued and conflated and parsed in the Twitter storm. As you are well aware, Dear Reader, I believe such imprecise and loaded and miscommunicated and angry discussion is the key to working through all of the issues. People do some of their best work when they are mad as all get out.
but…
We’ve been through these arguments before. Frequently, in my recollection. And I would say that the most angry disputes come around because of people who are not so good at distinguishing the prescriptive from the descriptive. And who are very, very keen to first kill the messenger.
Stupid JIF tricks, take eleven
November 3, 2020
As my longer-term Readers are well aware, my laboratory does not play in the Glam arena. We publish in society-type journals, and not usually the fancier ones, either. This is a category thing, in addition to my stubbornness. I have occasionally pointed out how my papers that were rejected summarily by the fancier society journals tend to go on to get cited better than their median and often their mean (i.e., their JIF) in the critical window where it counts. This, I will note, is at journals with only slightly better JIF than the vast herd of workmanlike journals in my fields of interest, i.e., with JIF from ~2-4.
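To spell out that mean-versus-median arithmetic, here is a minimal sketch with entirely invented citation counts for a hypothetical journal sitting just above that JIF 2-4 herd (real JIF figures follow Clarivate’s citable-item rules; this only shows why beating a journal’s median is easier than beating its mean when a few papers hoover up most of the citations).

```python
# Invented two-year-window citation counts for a hypothetical journal.
# A handful of well-cited papers drag the mean (the JIF-style number)
# well above what the typical paper in the journal actually earns.
from statistics import mean, median

citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 7, 9, 15, 31]  # hypothetical

jif_like_mean = mean(citations)     # 5.6 here
typical_median = median(citations)  # 3 here

my_paper = 8  # hypothetical count for a paper the fancier journal bounced
print(f"mean (JIF-like): {jif_like_mean:.1f}, median: {typical_median}")
print(f"beats median: {my_paper > typical_median}, beats mean: {my_paper > jif_like_mean}")
```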
There are a lot of journals packed into this space. For the real JIF-jockeys and certainly the Glam hounds, the difference between a JIF 2 and a JIF 4 journal is imperceptible. Some are not impressed even in the JIF 5-6 zone, where the herd starts to thin out a little bit.
For those of us who publish regularly in the herd, I suppose there might be some slight sense that journals toward the JIF 4-5 range are better than journals in the JIF 2-3 range. Very slight.
And if you look at who is on the editorial boards, who is EIC, who is an AE, and who is publishing at least semi-regularly in these journals, you would be hard pressed to discern any real difference.
Yet, as I’ve also often related, people associated with running these journals all seem to care. They always talk to their Editorial Boards in a pleading way to “send some of your work here.” In some cases, for the slightly fancier society journals with airs, they want you to “send your best work here”… naturally they are talking here to the demiGlam and Glam hounds. Sometimes at the annual Editorial Board meeting the EIC will get more explicit about the JIF, sometimes not, but we all know what they mean.
And to put a finer point on it, the EIC often mentions specific journals that they feel they are in competition with.
Here’s what puzzles me. Set aside the fact that a few very highly cited papers would jazz up the JIF for these lowly journals if the EIC, the AEs, or a few choice EB members were actually to take one for the team (they never do, that is). The ONLY things I can see these journals competing on are 1) rapid and easy acceptance without a lot of demands for more data (really? at JIF 2? no.) and 2) speed of publication after acceptance.
My experience over the years is that journals of interchangeable JIF levels vary widely in the speed of publication after acceptance. Some have online pre-print queues that stretch for months. In some cases, over a year. A YEAR to wait for a JIF 3 paper to come out “in print”? Ridiculous! In other cases it can be startlingly fast, as in assigned to a “print” issue within two or three months of acceptance. That seems… better.
So I often wonder why this system is not more dynamic and free-market-y. I would think that as the pre-print list stretches out to four months and beyond, people would stop submitting papers there. The journal would then have to shrink its list as the input slows down. Conversely, as a journal starts to head toward only a quarter of an issue in the pre-print list, authors would submit there preferentially, trying to get in on the speed.
Round and round it would go, but the ecosphere should be more or less in balance, long term. Right?
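If you want to see that feedback loop written down, here is a toy sketch with entirely made-up numbers (two interchangeable journals, a simplistic “authors mostly pick the shorter backlog” rule, nothing about any real journal) showing why the pre-print queues would tend to settle into rough balance rather than one running away.

```python
import random

# Toy model: two interchangeable journals with fixed per-month issue capacity.
# Authors mostly submit to whichever has the shorter accepted-but-unpublished
# backlog. All numbers are invented for illustration.
random.seed(1)

backlog = [20, 60]          # papers waiting in each journal's pre-print queue
capacity = 15               # papers each journal can assign to issues per month
submissions_per_month = 30  # accepted papers looking for a home each month

for month in range(24):
    for _ in range(submissions_per_month):
        shorter = 0 if backlog[0] <= backlog[1] else 1
        # authors pick the shorter queue 80% of the time
        choice = shorter if random.random() < 0.8 else 1 - shorter
        backlog[choice] += 1
    backlog = [max(0, b - capacity) for b in backlog]  # issues come out

print("backlogs after two years:", backlog)  # the two queues end up roughly even
```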