Strategic Planning: How To Complete Fascinating Projects And Publish Them In Top Journals (UPDATED)

May 26, 2008

I have recently noticed some fatalism among our junior colleagues–post-docs and recently independent PIs–concerning their prospects of completing “interesting” projects and getting them published in top journals, either field-specific or C/N/S-level. For example, Sciencewoman recently posted about her feelings of inadequacy triggered by a more junior colleague’s recent publication in a C/N/S-level journal:

Why is it that the other guy is getting a very high profile paper and I’m struggling to get results that will merit publication at all?

And her first answer (among others) was as follows:

He’s luckier than me. He got a project that worked.

The take-home message of this DrugMonkey post is that “luck”–whatever the fuck that word even means–is only one factor among many. And the other factors are much more within the control of the scientist. To see what these factors are, and how to take control of them, jump below the fold. (Also below the fold is an update that addresses management of multiple projects to diversify risk/reward.)


As an entry point for further discussion, here are the rest of the potential answers Sciencewoman adduces for her question:

He dared to submit to C/N/S.
He’s got more resources to throw at the project than I do.
He worked harder than me. Hard work begets rewards. (Sure would be nice if that were dependable.)
His project has simply had more time devoted to it. Science like wine gets better with time. (Or not. My project seems to diminish with age.)
His co-authors are bigger names than mine. They’ve published in C/N/S before.
The reason he has big-name co-authors is that he had and used better connections than I did when we were at the grad school hunting stage of the game.
He hasn’t made choices that balanced his professional aspirations against a spouse, and now a child.
It makes no difference to him whether one of his co-authors doesn’t like babies and maternity leave. It did to me.
He’s smarter than me. (But I don’t believe that.)

Now I am not saying in any way that some of these factors–mostly, but not completely, out of the control of any particular scientist–are irrelevant. But there are other factors that are even more important and are much more under the control of each scientist. They all relate to strategic planning, which I define here as the process of choosing both a scientific problem to address and appropriate methodological approaches for addressing it.
First, how do you choose a scientific problem to address? If your goal is, in part, to publish your work in a top field-specific or C/N/S-level journal, then whether the problem is interesting to you is only the beginning of the inquiry. You need to ask whether the problem is one that is central to your field (or, possibly, subfield), and one that you know to be of great interest to the field as a whole, based on what currently exists in the peer-reviewed literature, reviews, and commentary, and on what you suss out at conferences.
Thus, another possible answer to Sciencewoman’s question is that her junior colleague with his PI mentor surveyed the state of their field and chose a scientific problem to address that they explicitly determined to be of importance. Of course, junior scientists cannot do this without the guidance and participation of their mentors, so a failure on this count is, at least in part, a failure of mentorship. This is what is meant by “scientific taste”, and without it, it will be difficult to publish in good journals, secure independent PI positions, and obtain grant support.
Second, how do you identify appropriate methodological approaches to addressing your problem? In answering this question, let’s use as a starting point two excerpts from a recent comment by MsPhD at Dr. J’s place:

For better or worse, I tend to go where the interesting questions are, whether it’s what I’m good at or not. Sometimes that means I have to work a lot harder to get the answer.

This represents poor judgment on MsPhD’s part and, more importantly, her post-doctoral mentor’s part.
Before new experiments are embarked upon in the lab, one must engage in an explicit decision-making process in which one assesses the costs and benefits of doing so. This analysis includes weighing, inter alia, the following factors:
(1) the potential best-case payoff of the experiments (this is what MsPhD would call identifying “where the interesting questions are”);
(2) the intrinsic difficulty of the experiments;
(3) how much experience we already have in the required experimental approaches (do they play to our methodological strengths?);
(4) how long it will take to perform the experiments;
(5) how likely it is that the experiments will end up inconclusive;
(6) how likely it is that our competitors are on the same path;
(7) whether there are better experimental approaches to the same question;
(8) whether there are even more important questions that would have to be forgone in order to address the one(s) under consideration.
Identifying “where the interesting questions are” is only the beginning of the process of deciding where to direct one’s experimental efforts.
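None of this requires anything more elaborate than writing the factors down and scoring candidate projects against them. As a purely illustrative sketch (the factor weights, project names, and ratings below are hypothetical placeholders, not a prescription), the weighing might look something like this:

```python
# Hypothetical weighted scoring of candidate projects against factors (1)-(6).
# Weights and ratings are illustrative only; adjust them to your own lab.

FACTOR_WEIGHTS = {
    "best_case_payoff":        0.30,  # (1) potential best-case payoff
    "tractability":            0.15,  # (2) inverse of intrinsic difficulty
    "methodological_strength": 0.20,  # (3) fit with our existing expertise
    "speed":                   0.10,  # (4) inverse of time to completion
    "decisiveness":            0.15,  # (5) inverse of risk of inconclusive results
    "low_competition":         0.10,  # (6) inverse of crowding by competitors
}

def project_score(ratings):
    """Weighted sum of 0-10 ratings, one per factor."""
    return sum(FACTOR_WEIGHTS[f] * ratings[f] for f in FACTOR_WEIGHTS)

candidates = {
    "hard but fashionable question": {
        "best_case_payoff": 9, "tractability": 3, "methodological_strength": 4,
        "speed": 3, "decisiveness": 5, "low_competition": 4,
    },
    "plays to our strengths": {
        "best_case_payoff": 6, "tractability": 7, "methodological_strength": 9,
        "speed": 7, "decisiveness": 7, "low_competition": 6,
    },
}

# Rank candidates from highest to lowest weighted score.
for name, ratings in sorted(candidates.items(),
                            key=lambda kv: project_score(kv[1]), reverse=True):
    print(f"{name}: {project_score(ratings):.1f}")
```

Factors (7) and (8) are captured by the comparison itself: scoring more than one candidate at a time forces you to confront what you would forgo by committing to any one of them.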

Most of the people I know who have ‘excellent hands’ were just lucky enough to work on things that are either easier than what I do, or things that suit their particular talents perfectly. But there is some luck involved, or they’ve deliberately avoided things that they think will be too hard to do. Sometimes it is a wise choice.

Having “excellent hands” is not “luck” (whatever that even means). It is the outcome of a decision-making process that takes account of the costs and benefits outlined above. Failing to capitalize on existing methodological expertise is an error of judgment.
And if the perception is that existing methodological expertise is not suited to the “interesting questions”, then the error of judgment occurred earlier in spending the time and effort developing that expertise in the first place.
Sustained scientific success requires at least some foresight to stay in front of the methodological waves that course through a field. One of the ways to do this is to actually develop novel methodologies that can be used to address interesting questions.
Papers in top field-specific and C/N/S-level journals almost always involve some combination of an answer to a generally important question in a field and the development/application of a generally applicable novel methodology for addressing that and related questions. Strategic planning for publishing in these kinds of journals requires explicit consideration of both the importance of the question and the methodological approaches. It is hardly “luck”.
UPDATE: In comments, Sciencewoman poses the following question:

Aside from testing hypotheses where you already know the outcome (and therefore not really pushing the envelope of knowledge), how do you factor out luck in determining whether or not your novel hypothesis pans out and you get a high-profile pub?

This raises a key aspect of strategic planning that should have gone into the body of the post (and will, via an update). Every laboratory, and every scientist within a laboratory, needs to have an explicit plan for managing risk/reward.
Just as financial investment fund managers do not put all of their assets into low-risk/low-yield US Treasury notes, they also do not put all of their assets into high-risk/high-yield junk bonds. Rather, they try to put together a balanced portfolio that combines the guaranteed, but modest, returns of low-risk/low-yield investments with exposure to high-risk/high-yield investments.
This is exactly what every PI, and every scientist training under that PI, needs to do. An effective scientific portfolio contains a balance of low-risk/low-reward projects with a very predictable, but modest, outcome and high-risk/high-reward projects with an uncertain, but potentially high-impact, outcome. And because the latter are, by definition, uncertain, if one wants to maximize the likelihood of a high-impact payoff, one needs to expose oneself to more than one high-risk/high-reward scenario at a time.
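To put a rough number on that last point (the 20% per-project success probability below is an assumption chosen purely for illustration, not an estimate for any real project), the odds of at least one high-impact payoff improve quickly as independent high-risk projects are added to the portfolio:

```python
# Chance that at least one of n independent high-risk projects pays off,
# assuming each has the same (hypothetical) per-project success probability p.
def chance_of_at_least_one_hit(p, n):
    return 1 - (1 - p) ** n

for n in (1, 2, 3, 4):
    print(f"{n} high-risk project(s): {chance_of_at_least_one_hit(0.20, n):.0%}")
# 1 -> 20%, 2 -> 36%, 3 -> 49%, 4 -> 59%
```

Independence is itself a strong assumption, since projects within one lab often share reagents, hypotheses, and failure modes, but the qualitative point survives: a single high-risk bet leaves the outcome largely to chance, while a small portfolio of them does not.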
Corollary to this kind of analysis is developing the judgment to know when to “cut bait” on a particular high-risk/high-reward project that appears unlikely to pay off, on the one hand, and when to “go all in” with effort on some other particular high-risk/high-reward project that appears likely to pay off, on the other.

20 Responses to “Strategic Planning: How To Complete Fascinating Projects And Publish Them In Top Journals (UPDATED)”

  1. ppsgirl Says:

    agreed, except, there is some degree of luck to who is reviewing the manuscript (referring to previous posts)

  2. ppsgirl Says:

    also…”luck,” as i define it, means “probability.” certainly, there are higher or lower probabilities of getting anything published. some of it is, therefore, luck.

  3. glt Says:

    Hmph.
    Working on the “central problems” as defined by the current publications and research trends does indeed lead to publications. But it doesn’t allow much for people to do research that sometimes fails (but contributes by failing – though most usually it ends up not being publishable), that explores and solves a minor problem that will be used by future researchers, or that simply does something because it looks interesting at the time – sometimes panning out and producing a nice publication, sometimes not.
    Really, as the cost of research increases and as the importance of publications increases (I’ve watched it happen over the last 40 or so years I’ve been involved in the field), I’ve seen researchers go much more often for the Least Publishable Unit, and for research that is more or less certain to be funded and published. Often they’ve said that other things interested them more, but that promotions, tenure, publications… all hinged on doing the safe mainstream stuff.

  4. Vinay Says:

    I concur with you right away, regarding the “luck” factor. Well argued. I look forward to eliminating luck from my future research profile.

  5. I’d agree with the general point that the harder, and more thoughtfully, one works, the luckier one gets.
    When Dr Hyde hears me being jealous about someone else’s success, he says that he thinks women are more likely to compare themselves unfavorably to others. Men, he says, compare themselves favorably to others. Or don’t think about others at all. Or something like that.
    That’s a factor in sciencewoman’s post I think you (surprisingly, for you) missed–though you’re right that she needs to focus on the things within her control, she also should remind herself that she’s a great scientist, rather than listen to the inner voice suggesting that she doesn’t have what it takes. Women are really hard on themselves. Possibly due to that whole patriarchy thingy.

  6. PhysioProf Says:

    Our goal here is to try to help people deploy the tools required to be great scientists, and to dispel the myth that it requires “luck” or “genius” or a Nobel-prize-winning mentor or whatthefuckever.

  7. juniorprof Says:

    When I was an undergrad student I took a class on the psychology of “free will”. In that class we read “Elbow Room” by Daniel Dennett. That little book helped me understand the relationship between “luck” and preparation in ways that have benefited my career over and over again. I strongly recommend it to anyone struggling with such factors in their career development.

  8. JSinger Says:

    When Dr Hyde hears me being jealous about someone else’s success, he says that he thinks women are more likely to compare themselves unfavorably to others. Men, he says, compare themselves favorably to others. Or don’t think about others at all. Or something like that.
    I think he’s confusing what men say to others (and what he himself may well think) with what they think.

  9. drdrA Says:

    I think about this A LOT, being in a similar position as Sciencewoman. I’ll just make you a list of a few points about this post.
    1. I know it sounds trite- but you make your own luck. You do- in all the ways PP mentioned,- a great project requires great planning. (and it’s true, you can still get kicked in the teeth by reviewers)…
    2. You can have the greatest project planning etc. in the universe and not be able to execute it if you have terrible ‘hands’ at the bench. Some people are fantastic thinkers but don’t have the manual dexterity, or the powers of observation, or whatever, required to be a good experimentalist themselves. The key to this is figuring out how to get it done despite the hands issue.
    3. About Sciencewoman’s situation specifically- wow, do I have a lot to say about this because it hits me close to home. We’ll just start with this- a young woman, junior faculty, with kids- should not let a comparison of herself to anyone else (and especially to a single man with no other commitments) determine her worth or ability as a scientist. The C/S/N papers will come, but it will take a little time.

  10. sciencewoman Says:

    Thanks for your insightful comments PP. I’m still thinking about this luck business. What I’m trying to figure out is how luck is NOT a factor when it comes to negative results. If you develop a novel hypothesis and you do the experiments and the results support your hypothesis, you have a publishable paper. But if you develop a novel hypothesis, and your results don’t support it, either the project doesn’t get published or you work to reframe the project and publish it in a lower-tier journal. Aside from testing hypotheses where you already know the outcome (and therefore not really pushing the envelope of knowledge), how do you factor out luck in determining whether or not your novel hypothesis pans out and you get a high-profile pub?

  11. Our goal here is to try to help people deploy the tools required to be great scientists
    Of course! And one of those tools is how to not get into a negative thought-bog that ultimately drives you from science, or makes you fearful to do any but the safest experiments.

  12. PhysioProf Says:

    Aside from testing hypotheses where you already know the outcome (and therefore not really pushing the envelope of knowledge), how do you factor out luck in determining whether or not your novel hypothesis pans out and you get a high-profile pub?

    This raises a key aspect of strategic planning that should have gone into the body of the post (and will, via an update). Every laboratory, and every scientist within a laboratory, needs to have an explicit plan for managing risk/reward.
    Just as financial investment fund managers do not put all of their assets into low-risk/low-yield US Treasury notes, they also do not put all of their assets into high-risk/high-yield junk bonds. Rather, they try to put together a balanced portfolio that combines the guaranteed, but modest, returns of low-risk/low-yield investments with exposure to high-risk/high-yield investments.
    This is exactly what every PI, and every scientist training under that PI, needs to do. An effective scientific portfolio contains a balance of low-risk/low-reward projects with a very predictable, but modest, outcome and high-risk/high-reward projects with an uncertain, but potentially high-impact, outcome. And because the latter are, by definition, uncertain, if one wants to maximize the likelihood of a high-impact payoff, one needs to expose oneself to more than one high-risk/high-reward scenario at a time.
    Corollary to this kind of analysis is developing the judgment to know when to “cut bait” on a particular high-risk/high-reward project that appears unlikely to pay off, on the one hand, and when to “go all in” with effort on some other particular high-risk/high-reward project that appears likely to pay off, on the other.

  13. Regarding risk: YES! I can’t believe how many graduate students don’t figure this out early on. Maybe some people spend so much time denigrating “descriptive” projects that they forget the value of a study that’s publishable regardless of outcome.

  14. whimple Says:

    It’s not “descriptive”… it’s “quantitative”. 🙂

  15. msphd Says:

    Wow, I’m really sorry I read this.
    Where to begin. I don’t have time for a full-blown rant.
    Let’s start here: Of course, junior scientists cannot do this without the guidance and participation of their mentors
    WRONG. WRONG. WRONG. It does not work like that.
    Our mentors, I beg to inform you, have ZERO novel ideas of their own.
    The projects they offer us are:
    a) “Here, work on what I proposed in my funded grant! It’s boring and won’t work, but it’s how I get funding, so…”
    b) “Fuck if I know. Go figure something out.”
    Moving on.
    I said: For better or worse, I tend to go where the interesting questions are, whether it’s what I’m good at or not. Sometimes that means I have to work a lot harder to get the answer.
    And you said This represents poor judgment on MsPhD’s part and, more importantly, her post-doctoral mentor’s part.
    BEG TO DIFFER. If you want to do ANYTHING NEW, it’s going to be HARD. I have had backup projects, and I have had hard projects. Both have worked. It has taken a while. It has been WORTH IT. Scientifically, there is no question, it has been worth it. I wouldn’t trade a boring easy project for my project, not for the world.
    And let me point out, I have NEVER met or heard of a ‘mentor’ who knows the ins and outs of technical things as well as the lab members do.
    In this day and age, there are no PIs who can keep up.
    I went to dinner recently with a couple of young, highly successful PIs, who confessed to me that they have no idea how to do any of these new techniques themselves. They have to ask their grad students to teach them. This is perfectly normal! Nothing to be ashamed of!
    But it belies the Apprenticeship part of the system. To get new things to work at the bench, you have to be willing to work at it. Yourself. Your mentor will not help you.
    Oh yeah, and one more thing. This shit where you said how do you choose a scientific problem to address? If your goal is, in part, to publish your work in a top field-specific or C/N/S-level journal, then whether the problem is interesting to you is only the beginning of the inquiry. You need to ask whether the problem is one that is central to your field
    WRT ScienceWoman? That is some condescending crap. What the hell makes you assume her project is not an interesting scientific problem central to her field? That’s pure assumption on your part, and pretty lame at that.
    Anyway, I’d say thanks for raising my blood pressure, but frankly I don’t need it.

  16. TeaHag Says:

    Am chewing on aspects of this right now because papers lead to funding… funding leads to more papers… and I’m convinced that this equilibrium is dynamic and dependent on strategy!
    Fascinating projects are often multi-component. So, a situation that suggests that there is “one simple answer to your simple question” seems to have more potential as a single publication, rather than the basis of a “five-year-plan”. The caveat to this would seem to be if the answer to your problem is “the Meaning of Life”. If it seems likely to lead to interesting off-shoots, evaluate these side threads for their potential and determine priority. Some may have the potential to be more impactful than others and I think that this is key for quality publishing and successful funding. You may have to bite the bullet and down-rate the stuff that appeals to you most on an intellectual level because you have to evaluate significance strictly in the context of your field and where it’s going.
    The next step would be to determine what technical difficulties need to be surmounted to complete the work necessary for the paper. All of the paper, to a clear and reasonable publication point. It’s a terrible waste of time to design a project in such a way that the paper which will result will not make the journal you envisage because your material is not sufficiently “back-stopped”. C/N/S papers are tautly written with each figure illustrative of a key point multiply demonstrated. If you can’t write it without apologizing for the absence of additional pieces of data (e.g. studies in XXX KO mice would likely corroborate this finding), then it’s just not going to make it into the frontline stuff. So maybe you should plan to get the mice and do the darned experiment KWIM?
    Do you have/need collaborators who can help you with this? “No man is an island” and neither is a researcher. I think that it is a mistake not to work to develop collaborative relationships both within your institution and outside. These work best when you bring contrasting skills to the project, they can also enhance your publication rate and these papers will be concrete evidence of a collaborative relationship come grant time. They also listen to you moan. I don’t think that you even have to be at PI level to do this, I’ve had positive collaborative interactions as a post-doc even if they didn’t bear real fruit until a few years after we left the lab.
    What has been zapping me lately is the “I’m right and they’re wrong” aspect of research. By this I mean “my as yet unpublished data is correct” and what is in the literature is “mis-interpreted and/or garbage”. It really is a challenge to find a way to fund and publish work that “defines a new paradigm”, particularly one that contradicts a predominant view of the field. There is simply a de facto assumption that what is already in press is the best or most valid answer. Oddly enough, this is less of a problem in hotly contested areas of research, but in my world, well, it seems like it’s tablet-smashing time (please excuse OT reference).
    Wash/rinse/repeat… because what seems like the best plan today may need revision depending on progress/circumstance etc. only a few months from now.

  17. A very interesting and timely post (for my benefit). I have been in the lab for only a few months and just presented a mini-thesis proposal yesterday. The proposal consisted of 2 projects, the first one I proposed based on the absence of the data from the literature and the fact that I am highly skilled in the methodology required for the data. The second project involves theories and methodologies that are completely foreign to me, but that I would like to learn. So according to this post I should be in a good position because I have a mix of high / low risk projects. Except for the fact that my supervisor thinks both projects are low risk, while other PI’s consider them both high risk. How do I, as student to the specific field of study, make the assessment? I believe in taking responsibility for myself, but at the same time it will take time for me to develop a thorough understanding of the field.

  18. PhysioProf Says:

    Our mentors, beg to inform you, have ZERO novel ideas of their own.

    In this day and age, there are no PIs who can keep up.

    Just to be clear, MsPhD is speaking here about her own experiences–which seem to have involved a severe lack of effective mentorship. This is very sad, and represents a failure of the system.
    However, this does not mean that there are no good PIs who are highly creative, excellent methodologically, and extremely effective mentors. I know many.
    And by the way, MsPhD really, really, really needs to get past the canard that it is a failing of mentorship and lab leadership if a PI does not (or even cannot) sit down at the bench side by side with a trainee (“apprentice”) and teach the trainee the physical process of performing a particular technique. This has nothing to do with being a good PI and an effective mentor. It bears no correlation with whether a PI is good at generating novel ideas or techniques.
    I know that it is hard to see from the very limited perspective of a post-doc who spends her day at the bench physically performing experiments, but the best troubleshooter of an existing technique and the most creative inventor of new techniques can be a PI who couldn’t possibly sit down and perform them herself.
    Sitting at the bench or having good hands has nothing to do with being a good PI. Nothing. Nada. Zilch. There is no positive correlation.
    Some of the best PIs I know were nearly hopeless at the bench as trainees (and presumably would still be if they tried to do benchwork). And some of the most ineffective PIs I know–whose trainees are hung out to dry with no guidance at all and who tend to run away from their labs after a few years of no productivity at all–are themselves outstanding experimentalists and spend a lot of time at the bench.
    In fact, if anything, there may be some negative correlation. Those PIs who are outstanding experimentalists could be highly effective post-docs without even beginning to develop any of the skills required to be a PI: leading a group, troubleshooting experiments you didn’t perform yourself, etc.
    Those PIs who are not outstanding experimentalists had no choice as post-docs but to develop these other skills or fail. So they developed them. And now as PIs, surprise surprise, they are effective, while many of their “better hands” compatriots turn out to have no choice but to spend most of their time at the bench themselves doing the experiments that they cannot lead others at performing, and thus not only have no talent at mentoring, but also no time for it.

  19. DrugMonkey Says:

    Except for the fact that my supervisor thinks both projects are low risk, while other PI’s consider them both high risk.
    Awesome! You are already getting prepared for study section reviews!
    How do I, as student to the specific field of study, make the assessment? I believe in taking responsibility for myself, but at the same time it will take time for me to develop a thorough understanding of the field.
    Sure, and this is why you take multiple opinions and synthesize them with your own experiences and knowledge to come up with a plan. Do you have other committee members as tiebreaking votes? What did the supervisor and the other PI mean by high and low risk (this is not the same for all comers)? Do I take it that part of the exercise is that you were supposed to come up with projects of each type, or did this just emerge in the discussion?
    sorry to ask questions instead of helping but it is a bit hard to tell without more detail.
    at the very simplest level we might dissociate “risk” attributable to technical reasons from “risk” associated with empirical outcome.
    technical- what are the chances that this particular person, in this lab with these resources will be able to technically perform the experiments in a manner that leads to interpretable results.
    empirical- what are the chances that the outcome of the experiments will be show-stopping incredibly interesting data vs ho-hum. will the effort be repaid by the outcome so to speak

  20. Dr. Shellie Says:

    Our goal here is to try to help people deploy the tools required to be great scientists, and to dispel the myth that it requires “luck” or “genius” or a Nobel-prize-winning mentor or whatthefuckever.
    Awesome mission, and one I applaud. Keep up the good work.
