DINGO!

May 4, 2012

According to the Maryland Court of Appeals.

A new ruling makes it easier for anyone attacked by a pit bull or pit bull mix in Maryland to take legal action against the dog’s owner.

The Maryland Court of Appeals ruling declares pit bulls as a breed are “inherently dangerous,” and the owner of a pit bull or a cross-bred pit that attacks is strictly liable for damages, as is any landlord who rents to a pit bull owner.

From this, which appears to be the decision, we get more clarity:

Upon a plaintiff’s sufficient proof that a dog involved in an attack is a pit bull or a pit bull cross, and that the owner, or other person(s) who has the right to control the pit bull’s presence on the subject premises (including a landlord who has a right to prohibit such dogs on leased premises) knows, or has reason to know, that the dog is a pit bull or cross-bred pit bull, that person is liable for the damages caused to a plaintiff who is attacked by the dog on or from the owner’s or lessor’s premises. In that case a plaintiff has established a prima facie case of negligence. When an attack involves pit bulls, it is no longer necessary to prove that the particular pit bull or pit bulls are dangerous.

Sick of reading these news accounts? I am. Try it yourself: Google “pit bull attack” on any given day.

UPDATE: For those who want to play “Pitbull Denialism” along with me, download your handy DINGO card.

This is huge. Previously, the only IC that, to my knowledge, made its funding data available was the NIGMS. We grant geeks were big fans, even those of us who don’t seek funding from that particular Institute of the NIH.

Well, apparently NCI has joined the party… I do hope this is a sign of things to come at other ICs. Do note that this comes in the wake of some announced policy changes from NCI head Varmus, which caused some consternation. Check the comments over at writedit’s pad. I concluded that this was just business as usual (i.e., as already practiced by numerous other ICs).

Meh. He’s actually talking normal stuff here. I read him as saying the payline is 7%ile (he says priority score, but I suspect he means percentile) and then, as is totally normal business as usual for many ICs, he’s talking about the grey zone wherein they violate the strict order of review for various Programmatic reasons.
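For anyone fuzzy on why the score/percentile distinction matters: a priority (impact) score only becomes a percentile once it is ranked against a reference pool of recently reviewed applications. A minimal sketch of that conversion follows; the 100*(rank − 0.5)/N form, the rounding, and the made-up reference pool are my assumptions about the convention, not an official NIH formula.

# Minimal sketch (Python) of a score-to-percentile conversion. The exact
# reference pool and rounding conventions are simplified/assumed here.
def percentile(app_score, reference_scores):
    """Percentile of app_score (lower = better) within reference_scores."""
    pool = sorted(reference_scores)
    rank = sum(1 for s in pool if s < app_score) + 1  # 1 = best score in the pool
    return round(100 * (rank - 0.5) / len(pool))

# Hypothetical pool of 81 scores spanning 10-90:
print(percentile(15, range(10, 91)))  # -> 7, i.e. the kind of "7%ile" quoted above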

Nothing to see here, save that 7%ile is the lowest payline I’ve heard mentioned as such…

Given such practices, however, we are all intensely curious about the mysterious grey zone behavior. I have asserted in the past that I think the NIGMS data very likely stand as a proxy for most, if not all, other ICs in the broad strokes. (The reason is that they dovetail nicely with the tiny bits of info that sneak out around the corners for the other ICs, if one is inclined to follow the breadcrumbs.) Importantly, the grey zone pickups are not randomly distributed. They are more likely the closer the score is to the payline. Well, now I have another data point.
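That claim is checkable any time an IC releases application-level outcomes. Here is a minimal sketch of how a grant geek might tabulate it; the (percentile, funded) record format, the 7%ile payline, and the 5-point bins are all illustrative assumptions, not an actual NCI or NIGMS data format.

# Minimal sketch (Python): tabulate grey zone pickup rates by distance from a
# nominal payline, given per-application (percentile, funded) records.
from collections import defaultdict

def pickup_rate_by_bin(apps, payline=7.0, bin_width=5):
    """apps: iterable of (percentile, funded_bool) for scored applications."""
    totals = defaultdict(int)
    funded = defaultdict(int)
    for pct, was_funded in apps:
        if pct <= payline:
            continue  # inside the payline; not a grey zone pickup
        bin_start = payline + bin_width * int((pct - payline) // bin_width)
        totals[bin_start] += 1
        funded[bin_start] += was_funded
    return {b: funded[b] / totals[b] for b in sorted(totals)}

# Made-up records; if pickups really hug the payline, the rate should fall
# off steadily as the bins move away from it.
example = [(8.0, True), (9.5, True), (11.0, False), (14.0, True),
           (16.0, False), (22.0, False), (25.0, False), (31.0, False)]
print(pickup_rate_by_bin(example))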

First up, the Experienced Investigator graph:

Yep, looks very familiar.

OK, how about the New Investigators?

Hmm, notice that percentile skew? Now let’s see about the ESI folks:

Yep.

Okay, so what? Well, I think this should continue to motivate people to keep the heat on whatever NIH representative happens to be listening: POs, or those poor, poor higher-ups who have to get up in front of a room of agitated PIs and put a happy face on things.

My point is that this skew shows that study sections are not responding to the clear intent and desire of the NIH, i.e., to treat newb investigators more fairly (or more “generously,” one might argue). And just like with any other NIH initiative with respect to review, I assume that they are serious about it. They’ve shown this by changing ESI paylines, making grey zone pickups more frequent, etc. So why not fix the problem at the point of review?

1) First off, you would think that both Program and the reviewers would see that their refusal to treat ESI apps in the mix with the rest decreases their input even further. We all have the experience that the tightest discussion and the most agonized decision making as an assigned reviewer comes at the perceived payline. For the obviously top applications, all we’re looking to do is make the obvious argument. For the ones that are going to get triaged, or nearly triaged, well, the tendency is to just hit the high points, slap on a few StockCritiques and assign a score. Even if the app is discussed way down in ~30-40%ile land, there isn’t going to be much argument about the exact score range. The other reviewers aren’t going to be as engaged in trying to decide which end of the post-discussion score range they should go with. Not like they will be with applications that appear to be right around the perceived payline.

2) Next, this is a symptom of a larger problem, i.e., the NIH’s difficulty getting the review panels on board with its broader goals. Take “Innovation”. Despite a lot of hoopla in launching a new review approach, the data showed (thanks again, NIGMS) that review outcome was driven mostly by the same old, same old, i.e., Significance and Approach (a sketch of what I mean by “driven by” follows this list). The same problem applies if the ICs choose to fix this by scrutinizing the grey zone critiques for the ones that seem most “Innovative”… the panels haven’t discriminated the pool very well on that factor. More variance, more influence of the PO.

3) I think a lot of reviewers have no concept of these broader statistical trends. They are unaware of the data, blind to study section cultural influences and generally just haven’t thought things through very well. It may be that some of these people really believe in a different type of outcome for their study section and their subfield. But they have no notion of where the problem lies, nor that it is fixable.
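Here is the sketch promised in point 2: one way to put numbers on “driven mostly by Significance and Approach” is to correlate each criterion score against the overall impact score across a pile of applications. The column layout below is my own assumption about what such a dataset might look like, not an NIH export format.

# Minimal sketch (Python/NumPy): correlate each criterion score with the
# overall impact score. Column order is an assumption, not an NIH format.
import numpy as np

def criterion_correlations(scores):
    """scores: (n_apps, 6) array with columns
    [overall, significance, investigator, innovation, approach, environment];
    lower = better on the usual 1-9 scale."""
    labels = ["Significance", "Investigator", "Innovation", "Approach", "Environment"]
    overall = scores[:, 0]
    return {name: float(np.corrcoef(overall, scores[:, i + 1])[0, 1])
            for i, name in enumerate(labels)}

# If the NIGMS pattern holds, Significance and Approach come back with the
# strongest correlations and Innovation lags behind.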

It is most assuredly a fixable problem. I have two themes that I’ve pursued on the blog. First, education of study section members with respect to what they are doing. The funding data, such as the NCI and NIGMS charts posted and linked above, are the start of this. I’d like to see these outcomes made available to study section reviewers, right down to the level of their own review panel. Second, the solution of competing biases. Anytime there is human judgement, there is bias. Anytime. The only solution that offers high confidence of having an effect is the competition of biases. This is why the panels are explicitly representative of geography, sex, ethnicity and institution type/size. Where they are not representative is on the Newb/Experienced PI axis. (Also, one might argue that the Innovative!!!1111!!/Conservative PI axis has some skew, but that’s a chat for another day.)

@boehninglab was skeptical that there was any point in talking about these issues. I, naturally, am of the opinion that the surest way to prevent things you want to see happen is to remain silent. Sure, there are never any guarantees that your position will change anything at the NIH, but if you don’t say something then there is a guarantee you won’t be heard.

So comment on the NIH sites, in this case the NCI one. Let them know how you see their behavior and why it is good or bad for the science in your subfield.

[h/t: @salsb]
__
ps. Check the NCI page for the R21 data; also interesting.
UPDATE: pps. PhysioProf noted that the score distribution for Experienced/New Investigators is much more similar for R21s than for R01s. Interesting to consider why that might be so. I would point to the “starter grant” bias…