Thanks

November 27, 2014

Today in the USA we think of all the things we are thankful for.

I am thankful for another year of awesome from you, Dear Readers. Thanks for reading, for commenting, for writing your Congress Critters, for challenging the bad in academic careers, for donating to school classrooms and for generally giving a care about this world we live in.

And for the hilarity. Thanks for that.

UPDATE: All six projects fully funded as of Nov 25. Thanks, everyone!

I am not surprised, but I am disappointed. The grand jury convened to consider the shooting of Michael Brown in Ferguson, Missouri, by Darren Wilson has decided there are no grounds for a trial.

There is one tiny, but undeniably tangible, thing that I can do to register my feelings from afar.

Searching by Zip Code 63135 at Donors Choose I found quite a few hits on project proposals from the teachers of Ferguson.

I invite you to join me in donating to help the school children of Ferguson further their education.

Mrs. Hicks’ third grade classroom at Ferguson Central Elementary needs a rug for children to sit on for circle time. A rug.

Mr. Eye has to teach children at Ferguson Middle school in two classrooms at once.

Now my students in my second room have to cram around the doorway or in the other room and watch my instruction and then go back and try to remember how it was done, unlike the students who can see the board from their seats and can follow along with instruction as I go through group activities. I utilize this method of instruction 50% of the time as the other 50% is project based. My students who are in a separate room because of space problems are at a disadvantage and have less time to work as they have to ask questions multiple times because they cannot follow along as I give instruction, tips, and address concerns.

Mr. Eye is asking for a little technological upgrading to help the poor unfortunate kids trying to learn in these circumstances. It’s the US in 2014, people.

Ms. Milliano’s Books project at Walnut Grove Elementary School is one that will knock you down with its topicality.

The students in our school come from low socioeconomic households, usually headed by single mothers. Many of our students are not exposed to matters and events on a national or global level. Most of the students in our building have little contact with kids their age beyond those in our school district.

We would use these news magazines to increase awareness of global and national current events, and promote discussion on how such events impact them. We will use the math series to demonstrate the importance and use of math in everyday life. Most importantly we will use the magazines to show the lives and accomplishments of people of their own age.


Update: ONWARD!

Mrs. Randoll’s students at Walnut Grove Elementary School need help learning math.

I am so excited about these number and shape manipulatives because they are items my students can, and will, use each day. My students will use these during Math Work Stations to build math skills such as: number identification, rote counting, number fluency, and sorting.

Mrs. Linder’s Technology project at Airport Elementary School in Berkeley, MO is devoted to children with significant challenges in addition to underfunded school systems and general socio-economic disparity.

I work with students with Individualized Education Plans with a variety of diagnoses including Autism, Intellectual Disability, ADHD, and many others. We work on skills to be successful in the school setting such as handwriting, cutting, feeding, and self care skills. These students are multi-sensory learners who benefit from repetition and learning in a variety of ways.

…and just like the circle time rug, I’m tearing up again. Help if you can.

The cello has the most beautiful sound of all the strings. Mrs. Burke’s music program at Berkeley Middle School could use a rack to secure the instruments. And dare she ask? A new upright bass?

Our school in the Ferguson-Florissant School District serves mostly students who are below the poverty line. They rarely have their own instruments. Young musicians use district instruments, some of which have been in the district since the 80’s. They rent them for $25.00/year. Some continue to rent instruments for the 10 years that they are in the orchestra program. This is difficult for many of our parents.

Expertise versus consistency

November 24, 2014

In NIH grant review the standing study section approach to peer review sacrifices specific expertise for the sake of consistency of review.

When each person has 10 R01s to review, the odds that he or she is not the most specifically qualified person for all 10 are high.

The process often brings in additional panel members to help cover scientific domains on a per-meeting basis but this is only partially effective.

The Special Emphasis Panel can improve on this but mostly it does so because the scope of the applications under review is narrower. Typically the members of an SEP still have to stretch a bit to review some of the assignments.

Specific expertise sounds good but can come at the cost of consistency. Score calibration is a big deal. You should have seen the look of horror on my face at dinner following my first study section when some guy said “I thought I was giving it a really good score…you guys are telling me that wasn’t fundable?”

Imagine a study section with a normal sized load of apps in which each reviewer completes only one or two reviews. The expertise would be highly customized on each proposal but there might be less consistency and calibration across applications.

What say you, Dear Reader? How would you prefer to have your grants reviewed?

SFN 2014 Is Over

November 20, 2014

I woke up two hours early today with my brain obsessing over our next research priorities, thanks to the meeting. Working as intended, then.

For some reason I didn’t get around to visiting a single exhibitor other than NIH. First time for everything, right?

It is really great to see so many of the online people I’ve met through blogging and to see them succeeding with their science and careers.

The postdocs who have left our department in recent years for faculty jobs are kicking all kinds of science booty and that is nice to see.

Talk to Program, talk to Program, talk to Program…….

Catching up with the science homie(s) that you’ve known since postdoc or grad school is good for the soul. Dedicate one night for that.

Don’t badmouth anyone in the hearing of relative strangers… really, you can’t know who likes and respects whom, and science is very small. I know 30,000 attendees makes you think it is large but… it isn’t.

Gossip about who is looking to find a new job….see above.

I ran into the AE who decided not to bother finding reviewers for our paper whilst at SfN and heroically, HEROICALLY, people, managed not to demand immediate action.

A little bummed I missed the Backyard Brains folks this year…anybody see what shenanigans they are up to now?

You know when you go over to meet and butter up some PI, trainees? Don’t worry, it’s awkward from our end too.

Thought of the Day

November 18, 2014

It turns out that trolling someone else’s lab from a meeting with the cool study you just thought of that THEY NEED TO GET ON RIGHT NOW is even better than doing it to your own lab.

Way back in 2008 I expressed my dissatisfaction with the revision-cycle holding pattern that delayed the funding of NIH grants.

Poking through my pile of assignments I find that I have three R01 applications at the A2 stage (the second and “final” amendment of a brand new proposal). Looking over the list of application numbers for the entire panel this round, I see that we have about 15% of our applications on the A2 revision.

Oi. What a waste of everyone’s time. I anticipate many reviewers will be incorporating the usual smackdown-of-Program language. “This more than adequately revised application….”

I am not a fan of the NIH grant revision process, as readers will have noticed. Naturally my distaste is tied to the current era of tight budgets and expanding numbers of applications, but I think the principles generalize. My main problem is that review panels use the revision process as a way of triaging their workload. This has nothing to do with selecting the most meritorious applications for award and everything to do with making a difficult process easier.

The bias for revised applications is supported by funding data, round-after-round outcomes in my section, as well as supporting anecdotes from my colleagues who review. … What you will quickly notice is that only about 10% of applications reviewed in normal CSR sections get funded without being revised. … If you care to step back Fiscal Year by Fiscal Year in the CRISP [RePORTER replaced this- DM] search, you will notice the relative proportions of grants being funded at the unrevised (-01), A1 and A2 stages have trended toward more revising in concert with the budget flattening. I provide an example for a single study section here … What you will notice if you review a series of closely related study sections is that the relative “preference” for giving high scores to -01, A1 and A2 applications varies somewhat between sections. This analysis is perhaps unsurprising but we should be very clear that this does not reflect some change in the merit or value of revising applications; this is putting good applications in a holding pattern.

In the meantime, we’ve seen the NIH first limit revisions to one (the A1 version) for a few years to try to get grants funded sooner, counting from the date of first submission. In other words, to try to get more grants funded un-Amended, colloquially at the A0 stage. After an initial trumpeting of their “success” the NIH went to silent running on this topic during a sustained drumbeat of complaints from applicants who, apparently, were math-challenged and imagined that bringing back the A2 would somehow improve their chances. Then last year the NIH backed down and permitted applicants to keep submitting the same research proposal over and over, although after A1 the clock had to be reset to define the proposal as a “new” or A0 status proposal.
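The arithmetic behind why bringing back the A2 cannot raise anyone’s aggregate odds is worth spelling out. A minimal back-of-the-envelope sketch (all numbers invented for illustration; this is not NIH data or policy mechanics): the number of awards per round is fixed by the budget, so adding another amendment stage only adds applications to the pool without adding awards.

```python
# Toy illustration with invented numbers: a fixed budget funds a
# fixed number of grants per round, no matter how many amendment
# stages (A0, A1, A2...) applicants are permitted.
def awards_per_round(budget, cost_per_grant):
    return budget // cost_per_grant

# Per-application odds = awards / applications in the system.
# Reinstating A2s adds resubmissions (applications) but no awards,
# so the aggregate odds can only fall or stay flat.
def aggregate_success_rate(budget, cost_per_grant, n_applications):
    return awards_per_round(budget, cost_per_grant) / n_applications

rate_without_a2 = aggregate_success_rate(3000, 30, 500)  # A0 + A1 pool
rate_with_a2 = aggregate_success_rate(3000, 30, 650)     # same budget, A2s added
```

The point of the sketch is simply conservation: revision policy redistributes *which version* of a proposal gets funded and *when*, not how many get funded.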

I have asserted all along that this is a shell game. When we were only permitted to submit one amended version, allegedly the same topic could not come back for review in “new” guise. But guess what? It took almost zero imagination to re-configure the Aims and the proposal such that the same approximate research project could be re-submitted for consideration. That’s sure as hell what I did, and never ever got one turned back for similarity to a prior A1 application. The return to endless re-submission just allowed the unimaginative in on the game is all.

[Graph: Type 1 (new) grants funded at each amendment stage, 2000-2013]
This brings me around to a recent post over at Datahound. He’s updated the NIH-wide stats for A0, A1 and (historically) A2 grants expressed as the proportion of all funded grants across recent years. As you can see, the single study section I collected the data for before both exaggerated and preceded the NIH-wide trends. It was a section that was (apparently) particularly bad about not funding proposals on the first submission. This may have given me a very severe bias… as you may recall, this particular study section was one that I submitted to most frequently in my formative years as a new PI.

It was clearly, however, the proverbial canary in the coalmine.

The new Datahound analysis shows another key thing, which is that the traffic-holding, wait-your-turn behavior re-emerged in the wake of the A2 ban, as I had assumed it would. The triumphant data depictions from the NIH up through the 2010 Fiscal Year didn’t last, and of course those data were generated when substantial numbers of A2s were still in the system. The graph also shows that there was a very peculiar worsening from 2012-2013 whereby the A0 apps were further disadvantaged, once again, relative to A1 apps, which returns us right back to the trends of 2003-2007. Obviously the 2012-2013 interval was precisely when the final A2s had cleared the system. It will be interesting to see if this trend continues even in the endless-resubmission-of-A2-as-A0 era.

So it looks very much as though even major changes in permissible applicant behavior with respect to revising grants do very little. The tendency of study sections to put grants into a holding pattern and insist on revisions to what are very excellent original proposals has not been broken.

I return to my 2008 proposal for a way to address this problem:


So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a “desired” funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.

The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1 / 15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review… a great deal of time and effort would be saved.
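The rollover mechanics in that proposal can be sketched in a few lines. This is a toy model, not anything NIH actually runs: the target fraction, the half-and-half split, and the app/queue shapes are all invented for illustration.

```python
# Toy sketch of the proposed rollover scheme (all numbers invented).
# Each council round: apps scoring inside the "desired" target
# fraction but missing the actual budget cut join a queue instead of
# being sent back for revision; later rounds split their pickups
# roughly half from the queue and half from the current round.
def run_round(current_apps, queue, n_fundable, target_frac):
    # apps are (id, score) pairs; lower score = better, as in NIH scoring
    current_apps = sorted(current_apps, key=lambda app: app[1])
    n_meritorious = int(len(current_apps) * target_frac)
    meritorious = current_apps[:n_meritorious]

    # fund up to half of this round's slots from the rollover queue
    n_from_queue = min(len(queue), n_fundable // 2)
    funded = queue[:n_from_queue]           # oldest roll-overs first
    queue = queue[n_from_queue:]

    # remaining slots go to the current round's best scorers
    n_from_current = n_fundable - len(funded)
    funded = funded + meritorious[:n_from_current]

    # meritorious-but-unfunded apps roll over rather than revising
    queue = queue + meritorious[n_from_current:]
    return funded, queue
```

Running a few rounds of this by hand shows the claimed property: a meritorious -01 app is never forced through an amendment cycle, it just waits its turn in the queue while fresh outstanding apps can still claim the current-round half of the slots.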

Do NOT creep on junior female scientists.

Do NOT creep on female scientists.

Do NOT creep on ANYBODY at the Annual Meeting.

(Getting drunk is not an excuse, btw.)

Don’t so much as say anything creepy on your Facebook or Twitter or out loud where anyone can hear you.

Let everyone get as much science out of the Meeting as they can without having to worry about what your nasty self is up to, eh?

There will be a minisymposium on synthetic drugs at the upcoming Annual Meeting of the Society for Neuroscience in Washington DC. You can find it on Tuesday, Nov 18, 2014, 1:30 PM – 4:00 PM in WCC Ballroom B.

571. Bath Salts, Spice, and Related Designer Drugs: The Science Behind the Headlines

Michael Baumann and Jenny Wiley have organized it, appropriately, given their respective expertise with cathinones and cannabinoids.

The abstract reads:

Recently there has been an alarming increase in the nonmedical use of novel psychoactive substances known as “designer drugs.” Synthetic cathinones and synthetic cannabinoids are two of the most widely abused classes of designer drugs. This minisymposium presents the most up-to-date information about the molecular sites of action, pharmacokinetics and metabolism, and in vivo neurobiology of synthetic cathinones and cannabinoids.

Looks to be a can’t-miss session for those of you who are interested in these drug classes.

I’ll extend my usual no-promises offer.

If you either drop your presentation details in the comments here or email me (drugmnky at the google mail) I’ll try to work it into my schedule.

If it is really cool (and I can understand it) I might even blog it.

Hope to catch up with old blog friends and meet a few new folks.

See y’all at BANTER.

via Twitter and retractionwatch, some hilarity that ended up in the published version of a paper*.

Although association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns both because of evidence from previous research and inconsistencies with a priori predictions.

Careful what sorts of editorial manuscript comments you let slip through, people.

__
*apparently the authors are trying to correct the record so it may not last in the official version at the journal.

We all know about the oversupply problem in academic science wherein we are minting new PhDs faster than we create / open faculty jobs to house them.

Opinions vary on what is the proper ratio. All the way from “every PHD that wants a hard-money traditional Professorial job should get one” to “who cares if it is 1% or even 0.001%, we’re getting the best through extreme competition!”

(I fall in between there. Somewhere.)

How would we detect it, however, if we’ve made things so dismal that too many trainees are exiting before even competing for a job?

One way might be if a job opportunity received very few qualified applicants.

Another way might be if a flurry of postdoctoral solicitations in your subfield appeared. I think that harder flogging of the advert space suggests a decline in filling slots via the usual non-public recruiting mechanisms.

I am hearing / seeing both of these things.

Apple Pay and inconvenience

November 9, 2014

Neuropolarbear cannot imagine what is inconvenient about the use of credit cards.

This is most likely because he uses the highly efficient wallet and the highly efficient pants-pocket in preference to the purse.

Discuss.

Small happy moments

November 8, 2014

I love it when the reviewers really GET the paper, what we are trying to do and why it is important!

If asked to pick the top two good things that I discovered about grant review when I first went for a study section stint, I’d have an easy time. The funny thing is that they come from two diametrically opposed directions.

The first amazing good thing about study section is the degree to which three reviewers of differing subdiscipline backgrounds, scientific preferences and orientations agree. Especially in your first few study section meetings there is little that is quite as nerve-wracking as submitting your initial scores and waiting to see if the other two reviewers agreed with you. This is especially the case when you are in the extreme good or bad end of the scoring distribution.

What I usually found was that there was an amazingly good amount of agreement on overall impact / priority score. Even when the apparent sticking points / points of approbation were different across all three reviewers.

I think this is a strong endorsement that the system works.

The second GoodThing I experienced in my initial service on a study section was the fact that anyone could call a grant up off the triage pile for discussion. This seemed to happen very frequently, again in my initial experiences, when there were significantly different scores. In today’s scoring parlance, think if one or two reviewers were giving 1s and 2s and the other reviewer was giving a 5. Or vice versa. The point being to consider the cases where some reviewers are voting a triage score and some are voting a “clearly we need to discuss this” score. In the past, these were almost always called up for discussion. Didn’t matter if the “good” scores were 2 to 1 or 1 to 2.

Now admittedly I have no CSR-wide statistics. It could very well be that what I experienced was unique to a given study section’s culture or was driven by an SRO who really wanted widely disparate scores to be resolved.

My perception is that this no longer happens as often and I think I know why. Naturally, the narrowing paylines may make reviewers simply not care so much. Triage or a 50 score..or even a 40 score. Who cares? Not even close to the payline so let’s not waste time, eh? But there is a structural issue of review that has squelched the discussion of disparate preliminary-score proposals.

For some time now, grants have been reviewed in the order of priority score, with the best-scoring ones being taken up for discussion first. In prior years, the review order was more randomized with respect to the initial scores. My understanding was that proposals were grouped roughly by the POs who were assigned to them so that the PO visits to the study section could be as efficient as possible.

My thinking is that when an application would be called up for discussion at some random position throughout the 2-day meeting, people were more likely to do so. Now, when you are knowingly saying “gee, let’s tack on a 30-40 min discussion to the end of day 2 when everyone is eager to make an earlier flight home to see their kids”… well, I think there is less willingness to resolve scoring disparity.

I’ll note that this change came along with the insertion of individual criterion scores into the summary statement. This permitted applicants to better identify when reviewers disagreed in a significant way. I mean sure, you could always infer differences of opinion from the comments without a number attached but this makes it more salient to the applicant.

Ultimately the reasons for the change don’t really matter.

I still think it a worsening of the system of NIH grant review if the willingness of review panels to resolve significant differences of opinion has been reduced.

One of the erroneous claims made by Steven McKnight in his latest screed at the ASBMB President’s space has to do with the generation of NIH funding priorities. Time will tell whether this is supposed to be a pivot away from his inflammatory comments about the “riff raff” that populate the current peer review study sections or whether this is an expansion of his “it’s all rubbish” theme. Here he sets up a top-down / bottom-up scenario that is not entirely consistent with reality.

When science funding used to be driven in a bottom-up direction, one had tremendous confidence that a superior grant application would be funded. Regrettably, this is no longer the case. We instead find ourselves perversely led by our noses via top-down research directives coming from the NIH in the form of requests for proposals and all kinds of other programs that instruct us what to work on instead of asking us what is best.

I find it hard to believe that someone who has been involved with the NIH system as long as McKnight is so clueless about the generation of funding priorities within the NIH.

Or, I suppose, it is not impossible that my understanding is wrong and jumps to conclusions that are unwarranted.

Nevertheless.

Having watched the RFAs that get issued over the years in areas that are close to my own interests, having read the wording very carefully, thought hard about who does the most closely-related work and seeing afterwards who is awarded funding… it is my belief that in many, many cases there is a dialog between researchers and Program that goes into the issuance of a specific funding announcement.

Since I have been involved directly in beating a funding priority drum (actually several instruments have been played) with the Program staff of a particular IC in the past few years, and they finally issued a specific Funding Opportunity Announcement (FOA) with text that looks suspiciously similar to stuff that I have written, well, I am even more confident of my opinion.

The issuance of many NIH RFAs, PAs and likely RFPs is not merely “top-down”. It is not only a bunch of faceless POs sitting in their offices in Bethesda making up funding priorities out of whole cloth.

They are generating these ideas in a dialog with extramural scientists.

That “dialog” has many facets to it. It consists of the published papers and review articles, conference presentations, grant applications submitted (including the ones that don’t get funded), progress reports submitted, and conversations on the phone or in the halls at scientific meetings. These are all channels by which we, the extramural scientists, are convincing the Program staff of what we think is most important in our respective scientific domains. If our arguments are good enough, or we are joined by enough of our peers, and the Program Staff agree there is a need to stimulate applications (PAs) or secure a dedicated pool of funding (RFAs, PASs), then they issue one of their FOAs.

Undoubtedly there are other inputs that stimulate FOAs from the NIH ICs. Congressional interest expressed in public or behind the scenes. Agenda from various players within the NIH ICs. Interest groups. Companies. Etc.

No doubt. And some of this may result in FOAs that are really much more consistent with McKnight’s charge of “…programs that instruct us what to work on”.

But to suggest that all of the NIH FOAs are only “top-down” without recognizing the two-way dialog with extramural scientists is flat out wrong.