A Letter to the Editor from a Princetonian alumna and Princetonian mother.

A few weeks ago, I attended the Women and Leadership conference on campus that featured a conversation between President Shirley Tilghman and Wilson School professor Anne-Marie Slaughter, and I participated in the breakout session afterward that allowed current undergraduate women to speak informally with older and presumably wiser alumnae. I attended the event with my best friend since our freshman year in 1973. You girls glazed over at preliminary comments about our professional accomplishments and the importance of networking. Then the conversation shifted in tone and interest level when one of you asked how have Kendall and I sustained a friendship for 40 years. You asked if we were ever jealous of each other. You asked about the value of our friendship, about our husbands and children. Clearly, you don’t want any more career advice. At your core, you know that there are other things that you need that nobody is addressing. A lifelong friend is one of them. Finding the right man to marry is another.

Jesus. The “MRS degree”? What fucking year is this again? 2013, right?

Oh, right. It’s because the elite of this world have such special problems in this regard, isn’t it?

As Princeton women, we have almost priced ourselves out of the market. Simply put, there is a very limited population of men who are as smart or smarter than we are. And I say again — you will never again be surrounded by this concentration of men who are worthy of you.

Of course, once you graduate, you will meet men who are your intellectual equal — just not that many of them. And, you could choose to marry a man who has other things to recommend him besides a soaring intellect. But ultimately, it will frustrate you to be with a man who just isn’t as smart as you.

So Princeton has cornered the market on smart men, eh? What easily falsifiable claptrap. Maybe once these Precious Princetonian Princesses are out in the world they find that the “smart men” aren’t enamored of elitist, pretentious twits who have fully embraced their ILAF* snobbery? naaahh…. couldn’t be.

Here is another truth that you know, but nobody is talking about. As freshman women, you have four classes of men to choose from. Every year, you lose the men in the senior class, and you become older than the class of incoming freshman men. So, by the time you are a senior, you basically have only the men in your own class to choose from, and frankly, they now have four classes of women to choose from. Maybe you should have been a little nicer to these guys when you were freshmen?

If I had daughters, this is what I would be telling them.

I don’t even know where to start. The assumption that, if you are a woman, you can only marry a man your own age or older? This woman has basically failed to mature past the high-school prom level. My goodness, what a twit. Or is this really about the underclass women failing to put out enough for her darling boys, who allegedly have their pick of any woman in the world?

I am the mother of two sons who are both Princetonians. My older son had the good judgment and great fortune to marry a classmate of his, but he could have married anyone. My younger son is a junior and the universe of women he can marry is limitless.

Rest easy, o ye Editors of Glamour Magazines of Science. I have been reminded that there are many who will be up against the wall before you, come the revolution.

__
*ILAF: Ivy League Asshole Factories, coined by our good blog friend bill

Jane Goodall, Plagiarist

March 27, 2013

From the WaPo article:

Jane Goodall, the primatologist celebrated for her meticulous studies of chimps in the wild, is releasing a book next month on the plant world that contains at least a dozen passages borrowed without attribution, or footnotes, from a variety of Web sites.

Looks pretty bad.

This bit from one Michael Moynihan at The Daily Beast raises the more interesting issues:

No one wants to criticize Jane Goodall—Dame Goodall—the soft-spoken, white-haired doyenne of primatology. She cares deeply about animals and the health of the planet. How could one object to that?

Because it leads her to oppose animal research using misrepresentation and lies? That’s one reason why one might object.

You see, everyone is willing to forgive Jane Goodall. When it was revealed last week in The Washington Post that Goodall’s latest book, Seeds of Hope, a fluffy treatise on plant life, contained passages that were “borrowed” from other authors, the reaction was surprisingly muted.

It always starts out that way for a beloved writer. We’ll just have to see how things progress. Going by recent events it will take more guns a’smokin’ in her prior works to start up a real hue and cry. At the moment, her thin mea culpa will very likely be sufficient.

A Jane Goodall Institute spokesman told The Guardian that the whole episode was being “blown out of proportion” and that Goodall was “heavily involved” in the book bearing her name and does “a vast amount of her own writing.” In a statement, Goodall said that the copying was “unintentional,” despite the large amount of “borrowing” she engaged in.

Moynihan continues on to catalog additional suspicious passages. I think some of them probably need a skeptical eye. For example I am quite willing to believe a source might give the exact same pithy line about a particular issue to a number of interviewers. But this caught my eye:

Describing a study of genetically modified corn, Goodall writes: “A Cornell University study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: their caterpillars reared on milkweed leaves dusted with Bt corn pollen ate less, grew more slowly, and suffered higher mortality.”

A report from Navdanya.org puts it this way: “A 1999 Nature study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: butterflies reared on milkweed leaves dusted with bt corn pollen ate less, grew more slowly, and suffered higher mortality.” (Nor does Goodall mention a large number of follow-up studies, which the Pew Charitable Trust describes as showing the risk of GM corn to butterflies as “fairly small, primarily because the larvae are exposed only to low levels of the corn’s pollen in the real-world conditions of the field.”)

And here is the real problem. When someone with a public reputation built on what people think of as science weighs in on other matters of science, they enjoy a lot of trust. Goodall certainly has this. So when such a person misuses that trust by misrepresenting the science to further their own agenda…it becomes a larger hurdle for the forces of science and rational analysis to overcome. Moynihan is all over this part as well:

One of the more troubling aspects of Seeds of Hope is Goodall’s embrace of dubious science on genetically modified organisms (GMO). On the website of the Jane Goodall Foundation, readers are told—correctly—that “there is scientific consensus” that climate change is being driven by human activity. But Goodall has little time for scientific consensus on the issue of GMO crops, dedicating the book to those who “dare speak out” against scientific consensus. Indeed, her chapter on the subject is riddled with unsupportable claims backed by dubious studies.

So in some sense, the plagiarism is just emblematic of unserious thinking on the part of Jane Goodall. The lack of attribution will undoubtedly be sloughed off with an apology and a re-edit of the book. But we should not let the poor scientific thinking go unchallenged while the mob is raised against plagiarism; the abuse of scientific consensus is a far worse transgression.

Fun NIH RePORTER tricks

March 27, 2013

- Select your favorite ICs of interest in the Agency/Institute/Center field.
- Enter %R56% in the Project Number field.
- Submit Query.
- Click on various grants and hit the History tab.

Grind teeth in impotent rage.
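
For the click-averse: the same query can now be run programmatically. Here is a minimal sketch against the NIH RePORTER v2 web API, which did not exist when this was written; the endpoint and criteria field names reflect my reading of the current public docs, so treat them as assumptions rather than gospel.

```python
# Hypothetical sketch: pull R56 awards via the NIH RePORTER v2 API.
# In the web form the wildcard is '%R56%'; the JSON API uses '*'.
import requests

payload = {
    "criteria": {
        "project_nums": ["*R56*"],   # assumed field name for Project Number
        "fiscal_years": [2013],      # restrict to the FY of interest
    },
    "include_fields": ["ProjectNum", "ProjectTitle", "AwardNoticeDate"],
    "limit": 50,
}

resp = requests.post(
    "https://api.reporter.nih.gov/v2/projects/search",
    json=payload,
    timeout=30,
)
resp.raise_for_status()

for proj in resp.json().get("results", []):
    print(proj.get("project_num"), proj.get("project_title"))
```

You still have to click through to the History tab yourself to do the teeth-grinding.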

This is great. Skipping to the part where Justice Kagan is grilling the lawyer Charles J. Cooper, Esq., on the harms of permitting gay marriage…..

JUSTICE KAGAN: Well, could you explain that a little bit to me, just because I did not pick this up in your briefs.
What harm you see happening and when and how and — what — what harm to the institution of marriage or to opposite-sex couples, how does this cause and effect work?

MR. COOPER: Once again, I — I would reiterate that we don’t believe that’s the correct legal question before the Court, and that the correct question is whether or not redefining marriage to include same-sex couples would advance the interests of marriage as a —

Justice Kennedy went to work on him for evading, leading to this incoherent blather from Cooper.

But consider the California voter, in 2008, in the ballot booth, with the question before her whether or not this age-old bedrock social institution should be fundamentally redefined, and knowing that there’s no way that she or anyone else could possibly know what the long-term implications of — of profound redefinition of a bedrock social institution would be. That is reason enough, Your Honor, that would hardly be irrational for that voter to say, I believe that this experiment, which is now only fairly four years old, even in Massachusetts, the oldest State that is conducting it, to say, I think it better for California to hit the pause button and await additional information from the jurisdictions where this experiment is still maturing.

Emphasis added.

HAHAHAHAAAHAHAH! In non-court parlance…Dude, they got NOTHING! This is pathetic, right? That the justification is to wait-n-see how it works out in some other state? When has it EVER been the case that protecting rights has worked out poorly and led to a lasting reconsideration? I mean, the case at hand is exactly this, but they only had like 6 months to experiment (and as far as I know the comparatively small number of gay marriages in CA during the window of opportunity has not destroyed heterosexual marriage in CA yet, right? There would’ve been news articles.)

It was left to Supreme Troll Scalia to rescue the point…tra-la-laaaaaaa!

JUSTICE SCALIA: Mr. Cooper, let me — let me give you one — one concrete thing. I don’t know why you don’t mention some concrete things. If you redefine marriage to include same-sex couples, you must — you must permit adoption by same-sex couples, and there’s — there’s considerable disagreement among — among sociologists as to what the consequences of raising a child in a — in a single-sex family, whether that is harmful to the child or not. Some States do not — do not permit adoption by same-sex couples for that reason.

Let me GIVE YOU an argument??????

Justice Ginsburg pokes holes in Scalia’s nonsense with regard to California’s gay adoption regulations and then Scalia comes out with a classic comment:

JUSTICE SCALIA: I — it’s true, but irrelevant. They’re arguing for a nationwide rule which applies to States other than California, that every State must allow marriage by same-sex couples. And so even though States that believe it is harmful — and I take no position on whether it’s harmful or not, but it is certainly true that — that there’s no scientific answer to that question at this point in time.

HHAHAHAAHAAHAHAHAAHAAH “I take no position” HAHAHAAHAHAAA!!!!!!!111!!!!!!

Sure you don’t, Scalia, sure you don’t.

Note: American Academy of Pediatrics backs gay marriage; says kids raised in such families do just as well

writedit said:

but with only 6 months left in the FY, this in fact translates into a 10% cut in their remaining appropriation. More than 80% of that appropriation is already committed to salaries, intramural research, and ongoing awards. This means that the small sliver left to make new awards takes the brunt of the cut.

I never like these types of analyses because they assume that the ICs aren’t anticipating the coming events, as if they are spending willy-nilly on the assumption that they will get as much or more appropriated funds as they did in the past year. Now, maybe this is true, but we can’t know for sure whether their belts have already been tightened in many areas. Maybe they have hiring freezes; we’ve heard some rumours about cutting back the travel budgets, and maybe the Intramural labs are taking an early haircut. Certainly a smart manager would have been acting on the assumption of the sequester, no? And for dang sure assuming a Continuing Resolution (CR) that held funding at the level of the most recent Fiscal Year for which there was a budget.

One thing we can see is the IC-by-IC behavior from 1Dec to 31Mar in terms of rolling out new R01s and other mechanisms. I find that many ICs are indeed conservative under CRs, with very few grants starting 1Dec (first possible start date for the Feb/Mar submissions) versus, say, 1Jul (first possible start date for the Oct/Nov submissions). Instead, the 1Dec awards usually are held off (save for a trickle) until a new budget and/or (as now) a full-year CR is passed.

One of my ICs of interest got out 20 new R01s in Jan, 10 in Feb and 7 in Mar, for example. None in Dec. They funded 173 new R01s in FY2012, 122 in FY2011 and 167 in FY2010.

Three start-date cycles times 37 per cycle projects to 111 new R01s for the year. That is ~90% of the FY2011 total of 122.

Now yes, of course, new R01s are only one part of the picture, and it would not take very many shifts of Programmatic priorities (toward continuation grants, smaller or larger mechs, etc.) to throw off my example here. But let us, for argument’s sake, credit that this is representative of their thinking.

This particular IC is acting as if they expected the sequester to be the rule of the day for FY2013. Right? They are being conservative up to this point in the year, funding only about 90% of the local nadir in new grants, i.e., the FY2011 number. From this perspective, they have not pushed the sequester burden off onto the “remaining appropriation”, i.e., the final 6 months. Their behavior in the first 6 months has anticipated the whole year.
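
Spelling the extrapolation out, with the example IC’s numbers from above (the three-cycles-per-FY multiplier is the same assumption used in the text):

```python
# Extrapolate the full-year new-R01 count from first-cycle behavior.
first_cycle_awards = 20 + 10 + 7   # Jan + Feb + Mar; none in Dec
cycles_per_fy = 3                  # three start-date cycles per fiscal year
projected_fy2013 = first_cycle_awards * cycles_per_fy  # 111

recent_years = {"FY2010": 167, "FY2011": 122, "FY2012": 173}
nadir = min(recent_years.values())  # the FY2011 local nadir, 122

print(f"Projected FY2013 new R01s: {projected_fy2013}")
print(f"Fraction of FY2011 nadir: {projected_fy2013 / nadir:.0%}")  # ~91%
```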

One can only hope that they have been similarly conservative with their other expenditures, of course. The cut you would be seeing, Dear Reader, is the across-the-board reduction applied to the noncompeting renewals that have come due since December.

Are you hearing that budgets have been trimmed by 10% or that PIs are dancing in the streets with relief at getting their whole budget, unchanged from the proposal (or the cut they took last year, more realistically)?

I dunno, maybe I am just hoping that the sequester effects will be no worse than we already anticipated. Still, from the data that we can see, the ICs seem very committed to using budgetary reductions and conservative funding throughout the year to keep their behavior steady and predictable. It is very rare that one fails to see a small flurry of left-over money fly out the door for late pickups on Sept 30, from what I can recall. And I can’t ever remember a whoopsie where the number of funded awards for the third start-date cycle in the FY crashed significantly downward.

NIH ICs are conservative in my experience and at least in this case it works to quiet our direst fears about the rest of the year.

A new blog on drug toxicology has recently appeared and I think some of my Readers will want to bookmark The Dose Makes The Poison.

What is it about? Well, the Intro post indicates:

So, a long time ago in a land far, far away, a brilliant scientist named Paracelsus (who is considered by many a toxicologist throughout time, to be the ‘Father of Toxicology’) wrote:

“Alle Dinge sind Gift, und nichts ist ohn Gift; allein die Dosis macht, dass ein Ding kein Gift ist.” (“All things are poison, and nothing is without poison; the dose alone makes a thing not a poison.”)

Trudat! A few more posts have appeared already….

….TV Shows Aren’t The Real World

Even though it doesn’t really make sense, I still want this mass spec! The sample that was analyzed was gastric contents of a decedent. It identifies “chicken stock”, coffee, and cocoa!


Analogue or not an analogue: that is the question!

Currently, cases involving the determination of a controlled substance analogue involve dueling chemists, toxicologists and pharmacologists, as there is no consensus in the scientific community regarding what exactly is a controlled substance analogue. Typically, the prosecuting attorneys will have consultation and testimony from the DEA chemists or toxicologists/pharmacologists while the defense will have consultation and testimony from chemists and toxicologists/pharmacologists from other entities. The decision boils down to opinion vs. opinion.

The return of double doc

March 22, 2013

Our good blog friend drdrA returns!

I’m sure you’ve noticed that I’ve taken three steps back from the blogging business for a while now. Although I don’t want to provide an exhaustive list of reasons for why I did this, I do want to offer a brief explanation. The first, and probably most important, reason is that I don’t feel like I had anything urgent to say, and when I don’t have anything to say it is better just to keep my mouth shut (put my keyboard down) rather than to splatter some drivel out there.

What was I thinking

March 22, 2013

Greybearded TDWFs FTW

March 21, 2013

this is making the rounds…

From the PBS News Hour


From a self-described newProf at Doc Becca’s digs:

Last week, the first NIH proposal I wrote with PI status was rejected… I knew things were bad, but it still hurts…Problem is, I don’t know how to allocate my time between generating more preliminary data/pubs and applying for more grants. How many grants does the typical NIH- and/or NSF-funded (or wannabe-funded) TT prof write per year before getting funded?

It is not about what anyone else or the “typical” person has done.

It is about doing whatever you possibly can do until that Notice of Grant Award arrives.

My stock advice right now is that you need to have at least one proposal going in to the NIH for each standard receipt date. If you aren’t hitting it at least that hard, before you have a major award, you aren’t trying. If you think you can’t get out one per round…. you don’t really understand your job yet. Your job is to propose studies until someone decides to give your lab some support.

My other stock advice is to take a look at the payline and assume those odds apply to you. Yes, special snowflake, you.

If the payline is 10%, then you need to expect that you will have to submit at least 10 apps to have a fighting chance. Apply the noob-discount and you are probably better off hitting twice that number. It is no guarantee and sure, the PI just down the hall struck it lucky with her first Asst Prof submission to the NIH. But these are the kinds of numbers you need to start with.
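
To put a number on “fighting chance”, here is a toy calculation. It assumes each application independently faces the payline as its odds, which is my simplification (reviews of your own apps are surely correlated), so read it as a floor on effort, not a forecast.

```python
# Toy model: submissions needed for a given chance of at least one award,
# treating each app as an independent draw at the payline.
import math

def apps_needed(payline: float, target: float) -> int:
    """Smallest n such that P(at least one award in n tries) >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - payline))

payline = 0.10
for target in (0.50, 0.65, 0.90):
    print(f"{target:.0%} chance of an award: {apps_needed(payline, target)} apps")
# 50%: 7 apps; 65%: 10 apps; 90%: 22 apps
```

Note that the 22-app figure for a 90% shot lines up with the advice to apply the noob-discount and double the naive count.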

Once you get rolling, one new grant and one revised grant per round should be doable. They are a month apart and a revision should be way easier. After the first few, you can start taking advantage of cutting and pasting a lot of the grant text together to get a start on the next one.

Stop whining about preliminary data. Base it on feasibility and work from there. Most figures support at least a half dozen distinct grant applications. Maybe more.

I never know for sure how hard my colleagues are working when it comes to grant submissions. I know what I do…and it is a lot. I know what a subset of my other colleagues do and let me tell you, success is better correlated with effort (grants out the door) than it is with career rank. Rank has an effect, sure, but I know relatively senior investigators who struggle to maintain stable funding and ones who enjoy multi-grant stability. They are distinguished to some extent by how many apps they get out the door.

Same thing for junior colleagues. They are trying to launch their programs and all. I get this. They have to do a lot of setup, training and even spend time at the bench. But they also tend to have a very wait-and-see approach to grants. Put one in. Wait for the result. Sigh. “Well maybe I’ll resubmit it next round”. Don’t do this, my noob friends. Turn that app around for the next possible submission date.

You’ll have another app to write for the following round, silly.

Failure to Replicate

March 20, 2013

I should have put that in quotes because it actually appears in the title of this new paper published in Neuropsychopharmacology:

Hart AB, de Wit H, Palmer AA. Candidate gene studies of a promising intermediate phenotype: failure to replicate. Neuropsychopharmacology. 2013 Apr;38(5):802-16. doi: 10.1038/npp.2012.245. Epub 2012 Dec 3. [PubMed]

From the Abstract alone you can get a sense:

We previously conducted a series of 12 candidate gene analyses of acute subjective and physiological responses to amphetamine in 99-162 healthy human volunteers (ADORA2A, SLC6A3, BDNF, SLC6A4, CSNK1E, SLC6A2, DRD2, FAAH, COMT, OPRM1). Here, we report our attempt to replicate these findings in over 200 additional participants ascertained using identical methodology. We were unable to replicate any of our previous findings.

The team, with de Wit’s lab expert on the human phenotyping and drug-response side and Palmer’s lab expert on the genetics, has been after genetic differences that mediate differential response to amphetamine for some time. There’s a human end and a mouse end to the overall program which has been fairly prolific.

In terms of human results, they have previously reported effects as varied as:
- association of an adenosine receptor gene polymorphism with degree of anxiety in response to amphetamine
- association of a dopamine transporter gene promoter polymorphism with feeling the drug effect and with diastolic blood pressure
- association of casein kinase I epsilon gene polymorphisms with feeling the drug effect
- association of fatty acid amide hydrolase (FAAH) polymorphisms with Arousal and Fatigue responses to amphetamine
- association of mu 1 opioid receptor gene polymorphisms with Amphetamine scale subjective report in response to amphetamine

There were a dozen findings in total and, for the most part, the replication attempt with a new group of subjects failed to confirm the prior observations. The Discussion is almost plaintive at the start:

This study is striking because we were attempting to replicate apparently robust findings related to well-studied candidate genes. We used a relatively large number of new participants for the replication, and their data were collected and analyzed using identical procedures. Thus, our study did not suffer from the heterogeneity in phenotyping procedures implicated in previous failures to replicate other candidate gene studies (Ho et al, 2010; Mathieson et al, 2012). The failure of our associations to replicate suggests that most or all of our original results were false positives.

The authors then go on to discuss a number of obvious issues that may have led to the prior “false positives”.

- variation in the ethnic makeup of the samples: one reanalysis using ancestry as a covariate didn’t change their prior results.

- power in genome-wide association studies is low because effect sizes / contributions to variance by rare alleles are small. They point out that candidate gene studies continue to report large effect sizes that are probably very unlikely in the broad scheme of things…and therefore comparatively likely to be false positives.

- multiple comparisons. They point out that not even all of their prior papers applied multiple-comparisons corrections against the inflation of alpha (the false-positive rate, in essence), and certainly no such correction spanned the 12 findings that were reported across a number of independent publications. As they note, the adjusted p-value threshold for the “322 primary tests performed in this study” (i.e., the same number included in the several papers which they were trying to replicate) would be 0.00015 (see the quick sketch below).

- publication bias. This discussion covers the usual (ignoring all the negative outcomes), but the interesting thing is the confession about something many of us (yes, me) do that isn’t really addressed in the formal correction procedures for multiple comparisons.

Similarly, we sometimes considered several alternative methods for calculating phenotypes (eg, peak change score summarization vs area under the curve, which tend to be highly but incompletely correlated). It seems very likely that the candidate gene literature frequently reflects this sort of publication bias, which represents a special case of uncorrected multiple testing.
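
A quick sketch (mine, not the authors’) of why 322 uncorrected tests at a nominal alpha of 0.05 practically guarantee “findings”, and what the ~0.00015 Bonferroni threshold quoted above does about it:

```python
# Simulate 322 pure-noise tests: under the null, p-values are Uniform(0,1).
import random

random.seed(1)
n_tests, alpha = 322, 0.05
bonferroni = alpha / n_tests  # 0.05 / 322 ~= 0.000155, the ~0.00015 above

p_values = [random.random() for _ in range(n_tests)]
print(sum(p < alpha for p in p_values))       # expect ~16 "hits" from noise
print(sum(p < bonferroni for p in p_values))  # expect ~0
```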

This is a fascinating read. The authors make no bones about the fact that no fewer than 12 papers they have published were the result of false positives. Not wrong…not fraudulent. Let us be clear: we must assume they were published with peer review, analysis techniques and sample sizes that were (and are?) standard for the field.

But they are not true.

The authors offer up solutions of larger sample sizes, better corrections for multiple comparisons and a need for replication. Of these, the last one seems the best and most likely solution. Like it or not, research funding is limited and there will always be a sliding scale. At first we have pilot experiments or even anecdotal observations to put us on the track. We do one study, limited by the available resources. Positive outcomes justify throwing more resources at the question. Interesting findings can stimulate other labs to join the party. Over time, the essential features of the original observation or finding are either confirmed or consigned to the bin of “likely false alarm”.

This is how science progresses. So while we can use experiences like this to define a target sample size and scope for a real experiment, I’m not sure that we can ever overcome the problems of publication bias and cherry-picking results from among multiple analyses of a dataset. At first, anyway. The way to overcome them is for the lab, or the field, to hold a result in mind as tentatively true and then proceed to replicate it in different ways.

__
UPDATE: I originally forgot to put in my standard disclaimer that I’m professionally acquainted with one or more of the authors of this work.

Hart, A.B., de Wit, H., & Palmer, A.A. (2013). Candidate gene studies of a promising intermediate phenotype: failure to replicate. Neuropsychopharmacology, 38(5), 802-816. DOI: 10.1038/npp.2012.245

Blogrolling: SciRants

March 19, 2013

Check out this new blog by @boehninglab

In response to this recent comment from Dave,

You need people to do the work, but you don’t need AS MANY. No…way. Not in a million years. Give me a break DM. You know this…as well as I do.

which he made as an elaboration on this comment

The role of the NIH is to fund science, not prop up the entire community by providing them with salaries. I see way too many R01s with multiple, multiple techs, co-PIs and post-docs that do zero work on the grant in question. The grant is used purely for salaries and bennies. I think that is wrong, personally.

I had this response:

I do. In fact, I need more. It is my considered, and by now relatively experienced, view that for many types of research (read: the ones I am most familiar with) the $250K full-modular grant does not pay for itself, in the sense that there is a certain expectation of productivity, progress, etc. on the part of study sections and Program that requires more contribution than can be afforded within the budget (especially when you put it in terms of 40-hour work weeks). Trainees on individual fellowships or training grants, undergrads working for free or at a work-study discount, cross-pollination with other grants in the lab (which often leads to whinging like your comment), pilot awards for small bits, faculty hard-money time…all of these sources of extra effort are frequently poured into a one-R01 project. I think they are, in essence, necessary.

How about it, y’all? Do you see the amount of people-effort that can be afforded* within $250,000 in direct costs as covering the scope of work that is expected as reasonable output in your fields of interest? Be sure to specify the approximate contribution levels of PI, postdocs, grads, undergrads and techs and use appropriate salary ranges.

Current NRSA scale is here and salary cap is $179,700. You’ll have to look up your own benefits rates (20-25% of salary wouldn’t be that unusual) and local technician salary scales.
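
For the lazy, a back-of-the-envelope version of the exercise. Every salary and effort figure below is an illustrative assumption (roughly plausible for 2013), not the actual NRSA scale or anyone’s real budget; plug in your own local numbers.

```python
# Sketch: how much personnel effort fits inside a $250K full-modular year?
DIRECT_COSTS = 250_000
BENEFITS_RATE = 0.25  # 20-25% of salary wouldn't be unusual, per the post

personnel = {
    # role: (assumed annual salary, fraction of effort charged to this grant)
    "PI":           (120_000, 0.20),
    "Postdoc":      (42_000, 1.00),
    "Technician":   (38_000, 1.00),
    "Grad student": (28_000, 0.50),
}

total = 0.0
for role, (salary, effort) in personnel.items():
    cost = salary * effort * (1 + BENEFITS_RATE)  # benefits on top of salary
    total += cost
    print(f"{role:14s} ${cost:>9,.0f}")

print(f"{'Personnel':14s} ${total:>9,.0f}")
print(f"{'Remainder':14s} ${DIRECT_COSTS - total:>9,.0f}")  # supplies, animals, pubs
```

On these made-up numbers, four people at mostly full effort eat roughly $147K before you buy a single reagent, mouse or page charge.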

There are links on the idea of “productivity” under a grant award at the end of this post, the scatterplot posted by Jeremy Berg some time ago is highly relevant.
__
*don’t forget to add benefits on top of your salary estimate.

In email chatting with PP, as is our wont, I had the following query.

Do you think these “do it to Julia” muppethuggers really think they have the best objective solution? Or do they really know they are just looking out for número uno?

What do you think, Dear Reader?

For reference, Scicurious’ proposal

What if manuscript submission could be as good as a one-shot?

Like this: you submit a paper to a large umbrella of journals of several “tiers”. It goes out for review. The reviewers make their criticisms. Then they say “this paper is fine, but it’s not impactful enough for journal X unless major experiments A, B, and C are done. However, it could fit into journal Y with only experiment A, or into journal Z with only minor revisions”. Or they have the option to reject it outright for all the journals in question. Where there is discrepancy (as usual) the editor makes the call.

and the Neuroscience Peer Review Consortium.

The Neuroscience Peer Review Consortium is an alliance of neuroscience journals that have agreed to accept manuscript reviews from other members of the Consortium. Its goals are to support efficient and thorough peer review of original research in neuroscience, speed the publication of research reports, and reduce the burden on peer reviewers.

I think these schemes are flawed for a simple reason. As I noted in a comment at Sci’s digs….

Nobody bucking for IF immediately goes (significantly) down. They go (approximately) lateral and hope to get lucky. The NPRC is a classic example. At several critical levels there is no lateral option. And even if there was, the approximately equal IF journals are in side-eyeing competition…me, I sure as hell don’t want the editors of Biol Psychiatry, J. Neuro and Neuropsychopharmacology knowing that I’ve been rejected by one or two of the other ones first.

I also contest the degree to which a significantly “lower” journal thinks that it is, indeed, lower and a justifiable recipient of the leavings. Psychopharmacology, for example, is a rightful next stop after Neuropsychopharmacology but somehow I don’t think ol’ Klaus is going to take your manuscript any easier just because the NPP decision was “okay, but just not cool enough”. Think NPP and Biol Psych are going to roll over for your Nature Neuroscience reject? hell no. Not until their reviewers say “go”.

This NPRC thing has been around since about 2007. I find myself intensely curious about how it has been going. I’d like to see some data in terms of how many authors choose to use it (out of the total manuscripts rejected from each participating journal), how many papers are subsequently accepted at another consortium journal, the network paths between journals for those that are referred, etc.
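
Were the NPRC ever to cough up such data, the analysis itself would be trivial. A sketch, with entirely invented journal names and outcomes, of the tallies I mean:

```python
# Hypothetical referral records: (rejecting journal, receiving journal, accepted?)
import pandas as pd

df = pd.DataFrame(
    [
        ("J Neurosci", "J Neurophysiol", True),
        ("J Neurosci", "Neuroscience", False),
        ("Biol Psychiatry", "Psychopharmacology", True),
    ],
    columns=["from_journal", "to_journal", "accepted"],
)

n_referrals = len(df)                     # uptake of the referral option
accept_rate = df["accepted"].mean()       # success at the second journal
paths = df.groupby(["from_journal", "to_journal"]).size()  # network edges

print(f"{n_referrals} referrals, {accept_rate:.0%} accepted downstream")
print(paths)
```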

My predictions are that referrals are very rare, that they are inevitably downward in journal IF and that they don’t help very much. By the latter I mean that I bet only a minority of the manuscripts that use this system are subsequently accepted by the second journal’s editor on the strength of the original reviews and some stab by the authors at a minimal revision (i.e., as if they’d gotten minor-revisions from the original editor instead of rejection).

One fascinating, unknowable curiosity is the desk reject factor. The NPRC could possibly know how many of the second editors did desk rejects of the referred manuscripts based on the forwarded reviews. That would be interesting to see. But what they can’t know is how many of those would have been sent out for review if the reviews had not been forwarded. And if they had been sent out for review, what fraction would have received good enough reviews (for the presumptively more pedestrian journal) that they would have made it in.