It’s Uninterpretable!

August 6, 2020

No, it isn’t.

One of my favorite species of manuscript reviewer comment is that the data we are presenting are “uninterpretable”. Favorite, as in the sort of comment that leaves me unable to believe my colleagues in science are this unbelievably stupid and are not completely embarrassed to ever say such a thing.

“Uninterpretable” is supposed to be some sort of easy-out Stock Critique, I do understand that. But it reveals either flagrant hypocrisy (i.e., the reviewer themselves would fall afoul of such a criticism with frequency) or serious, serious misunderstanding of how to do science.

Dr. Zen is the latest to run afoul of my opinion on this. He posted a Tweet:

and then made the mistake of bringing up the U word.

(his followup blog post is here)

Now, generally when I am laughing at a reviewer comment, it is not that they are using “uninterpretable” to complain about graphical design (although this occasionally comes into the mix). They usually mean they don’t like the design of the experiment(s) in some way and want the experiment conducted in some other way. Or they want the data analyzed in some other way (graphical design issues included) OR, most frequently, a whole bunch of additional experiments.


“If the authors don’t do this then the data they are presenting are uninterpretable” – Reviewer #3. It’s always Reviewer #3.

Let me address Zen’s comment first. It’s ridiculous. Of COURSE the graph he presented is interpretable. It’s just that we have a few unknowns and some trust. A whole lot of trust. And if we’ve lost that, science doesn’t work. It just doesn’t. So it’s ridiculous to talk about the case where we can’t trust that the authors aren’t trying to flagrantly disregard norms and lie to us with fake data. There’s just no point. Oh, and don’t forget that Zen construed this in the context of a slide presentation. There just isn’t time for minutiae and proving beyond any doubt that the presenter/authors aren’t trying to mislead with fakery.

Scientific communication assumes some reasonable common ground, particularly within a subfield. This is okay. When there is cross talk between fields with really, really different practices, ok, maybe a little extra effort is needed.

But this is a graph using a box-and-whiskers plot. This is familiar to the audience and indeed Zen does not seem to take issue with it. He is complaining about the exact nature of the descriptive statistic conventions in this particular box-and-whiskers plot. He is claiming that if this is not specified, the data are “uninterpretable”. NONSENSE!

These plots feature an indicator of the central tendency of a distribution of observations, and an indicator of the variability in that distribution. Actually, most descriptive illustrations in science tackle this task. So… it’s familiar. This particular type of chart gives two indications of the variability: a big one and a small one. This is baseline knowledge about the chart type and, again, is not the subject of Zen’s apparent ire. The line is the central tendency. The box outlines the small indicator and the whiskers outline the big indicator. From this we move into interpretation that is based on expectations. Which are totally valid to deploy within a subfield.

So if I saw this chart, I’d assume the central tendency was most likely a median or a mean. Most likely the median, particularly if the little dot indicates the mean. The box therefore outlines the interquartile range, i.e., the 25%ile and 75%ile values. If the central tendency is the mean, then the box most likely outlines plus or minus one standard error of the mean or one standard deviation. Then we come to the whiskers. I’d assume they depict either the 95% Confidence Interval or the range of values.
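To make this concrete, here is a minimal sketch in Python/matplotlib (invented data; the defaults shown are matplotlib’s own choices, not any universal standard) of exactly the conventions at issue. The library has to pick something, and unless told otherwise it picks the median for the line, the interquartile range for the box, and 1.5 times the IQR for the whiskers:

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented data: two groups of 30 observations each
    rng = np.random.default_rng(42)
    groups = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (5.0, 6.5)]

    fig, ax = plt.subplots()
    # matplotlib defaults: line = median, box = 25%ile-75%ile,
    # whiskers = 1.5 * IQR. showmeans=True adds a little dot for the mean;
    # whis=(0, 100) would make the whiskers span the full range instead.
    ax.boxplot(groups, showmeans=True)
    ax.set_xticklabels(["Vehicle", "Drug"])
    ax.set_ylabel("Dependent measure")
    plt.show()

Any of the variants I just listed is a keyword argument away, which is rather the point: the chart type is the shared convention; the exact descriptive statistics are details.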

I do NOT need to know which of these minor variants is involved to “interpret” the data. Because scientific interpretation functions along a spectrum of confidence in the interpretation. And if differences between distributions (aha, another ready assumption about this chart) cannot be approximated from the presentation then, well, it’s okay to delve deeper. To turn to the inferential statistics. As for whether the small indicator is SD or SEM? Meh, we can get a pretty fair idea. If it isn’t the SD or SEM around a mean, or the 25%ile/75%ile around a median, but something else like 3 SEM or 35%ile/65%ile? Well, someone is doing some weird stuff trying to mislead the audience, or is from an entirely disparate field. The latter should be clear.

Now, of COURSE, different fields might have different practices and expectations. Maybe it is common to use 5 standard deviations as one of the indicators of variability. Maybe it is common to depict the mode as the indicator of central tendency. But again, the audience and the presenter are presumably operating in approximately the same space and any minor variations in what is being depicted do not render the chart completely uninterpretable!

This is not really any different when a manuscript is being reviewed and the reviewers cry “Uninterpretable!”. Any scientific paper can only say, in essence, “Under these conditions, this is what happened”. And as long as it is clear what was done and what the nature of the data is, the report can be interpreted. We may have more or fewer caveats. We may have a greater or smaller space of uncertainty. But we can most certainly interpret.

It sometimes gets even worse and more hilarious. I have this common scenario where we present data in which the error bars are smaller than the (reasonably sized) symbols for some (but not all) of the groups. And we may have cases where the not-different (by inferential stats *and* by any rational eyeball and consideration of the data at hand) samples cannot be readily distinguished from each other (think: overlapping longitudinal or dose curves).

“You need to use color or something else so that we can see the overlapping details or else it is all uninterpretable!” – Reviewer 3.

My position is that if the eye cannot distinguish any differences this is the best depiction of the data. What is an error is presenting data in a way that gives some sort of artificial credence to a difference that is not actually there based on the stats, the effect size and a rational understanding of the data being collected.

The currency of science news

September 23, 2015

Ok, I take the point that journalism should not only talk about science upon the publication of a paper. 

Absolutely.

Science news can be much more fluid, and semi-public knowledge of a finding often precedes formal publication.

But if there is a paper then it should be cited. Not merely linked obscurely, but properly cited.

Scientists have been complaining about the failure of journalists to cite papers associated with their science news stories for ages. Ed knows this as well as anyone in science journalism. So I am confused as to what he is about here.

Professional Differences

June 10, 2015

In real science, i.e., science that includes variability around a central tendency, we deal with uncertainty.

We believe, however, that there IS a central tendency, an approximate truth, a phenomenon or effect. But we understand that any single viewpoint, datum or even whole study may only reflect some part of a larger distribution. That part may or may not always give an accurate viewpoint on the central tendency.

So we have professional standards in place that attempt to honestly reflect this variable reality.

Most simply, we present the central tendency of effects (e.g., mean, median or mode) and some indication of variability around that central tendency (standard error, interquartile range, etc).
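In code terms, the convention amounts to something like this minimal sketch (Python, with invented numbers, just to pin down the vocabulary): one number for the center, one or two for the spread.

    import statistics as st

    # Invented observations, for illustration only
    obs = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 4.7, 5.0]

    mean = st.mean(obs)
    median = st.median(obs)
    sd = st.stdev(obs)                    # sample standard deviation
    sem = sd / len(obs) ** 0.5            # standard error of the mean = SD / sqrt(N)
    q25, _, q75 = st.quantiles(obs, n=4)  # quartile cut points

    print(f"mean = {mean:.2f} +/- {sem:.2f} (SEM)")
    print(f"median = {median:.2f}, IQR = [{q25:.2f}, {q75:.2f}]")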

Even when we present a single observation (such as a pretty picture of a kidney or brain slice all highlighted up with immunohistochemical tags) we assert that the image is representative. This statement means that this individual image has been judged to be close to the central tendency of the images that were used to generate the distributional estimates contributing to the numerical central tendency and variability presented in the graphs / tables.

Now look, I understand that it is a bit of a joke. There are abundant cracks and redefinitions that point out that the “most representative image” really means “the image that best makes our desired point”.

There is a critically important point here. Our profession does not validate the least representative image as an acceptable standard. Our professional standards say that it really should be representative if we ever present N=1 observations as data.

The alleged profession of journalism does not concern itself with truth and representativeness at all.

Their professional ethical standards, to the extent they exist, focus on whether the N=1 actually occurred AT ALL. In addition, they focus on whether that datum was collected fairly by their rules, i.e., was the quote on the record. Accuracy, again for the alleged profession, concerns only episodic truth. Did this interviewee literally string these words together in this order at some point in time during the interview? If so, then the quote is accurate. And it can be used in a published work to support the notion that this is what that interviewee saw, experienced or believes.

It is entirely irrelevant to the profession of journalism if that accident of strung-together words communicates the best possible representation of the truth of what that person saw, experienced or believes. Truth, in this sense, is not the primary professional ethical concern of journalism.

If the journalist pulls a quote out of an hour of conversation that best fits their pre-existing agenda with respect to the story they are planning to tell, it literally does not matter if every other sentence spoken by that person tells a different tale. It’s totally okay because that interviewee literally said those words in that order on the record (and it is on tape!).

If a scientist processes twenty brains in the experiment, grabs the one outlier that tells the story they want to tell, trashes the 19 that say the opposite and calls it a representative image (even if by inference rather than directly)… this is fraud and data fakery. Not okay. Clearly outside the professional bounds.

That, my friends, is the difference.

And this is why you should only agree to talk to journalists* that will send you a nearly final draft of their piece to ensure that you have been represented accurately.

If every single one of us scientists insisted on this, it would go a long way to snapping the alleged profession into line. And greatly improve the accurate communication of scientific findings and understandings to nonspecialist** audiences.

__
Representative image from here.

*They exist! I have interacted with more than one of these myself.

**Reminder, we ourselves are nonspecialist consumers of much of the science-media. We have two interests here.

The life of the academic scientist includes responding to criticism of their ideas, experimental techniques and results, interpretations and theoretical orientations*.

This comes up pointedly and formally in the submission of manuscripts for potential publication and in the submission of grant applications for potential funding.

There is an original submission, a return of detailed critical comments and an opportunity to respond to those critiques with revisions to the manuscript / grant application and/or argumentative rebuttal.

As I have said repeatedly in this forum, one of my most formative scientific mentors told me that you should take each and every comment seriously. Consider what is being said, why it is being said and try to respond accordingly. This mentor told me that I would usually find that by considering even the most idiotic seeming comments seriously, the manuscript (or grant application) is improved.

I have found this to be a universal truth of my professional work.

The line between what I was told by this mentor and what I have filled in over the years in my own similar comments to my trainees is now very fuzzy. I cannot remember exactly how much of my current understanding the mentor laid down. For example, it is helpful to me to consider that Reviewer #3 represents about 33% of peers instead of thinking of this person as the rare outlier. I think that one may be my own formulation. Regardless of the relative contributions of my mentor versus my lived experience, it is all REALLY valuable advice that I have internalized.

The paper and grant review process is not there, by any means, to prove to you beyond a shadow of a doubt** that the reviewer’s position is correct and you are wrong. Reviewers who provide citations to back a criticism are by no means the majority in my experience… although you will see this occasionally. Even then, you could always engage the cited statements from an antagonistic default setting. This is unwise.

The upshot of this critique-not-proof system means that as a professional, you have to be able to argue against yourself in proxy for the reviewer. This is why I say you need to consider each comment thoughtfully and try to imagine where it is coming from and what the person is really saying to you. Assume that they are acting in good faith instead of reflexively jumping behind paranoid suspicions that they are just out to get you for nefarious purposes.

This helps you to critically evaluate your own product.

Ultimately, you are the one that knows your product best, so you are the one in position to most thoroughly locate the flaws. In a lot of ways, nobody else can do that for you.

Professionalism demands that you do so.

__
*Not an exhaustive list.

**colloquially, they are leading you to water, not forcing you to drink.

Citation Curmudgeonry

May 7, 2015

  • In response to a post at Potnia Theron’s blog:

Authorship decisions

December 12, 2014

Deciding who should and should not be on the author line of a science publication is not as simple as it seems. As we know, citations matter, publications matter and there are all sorts of implications for authorship of a science publication.

A question about this arose on the Twitts:

Of course, we start from a very basic concept. Authorship of a scientific paper is deserved when someone has made a significant contribution to that paper. I can’t distill it down any more than that. Nice and clean.

The trouble comes in when we consider the words significant and contribution.

This is where people disagree.

I also rely on another basic concept, which is that someone should try to match, to a large extent, the practices within the subfields in which similar work is published. This can mean the journal itself, the scientific sub-domain or the institution type from which the paper is being submitted.

On to the specifics of this case.

First, do note that I understand that not everyone is in the position to wield ultimate authority when it comes to these matters. @forensictoxguy appears to be able to decide so we’ll take it from that perspective. I will mention, however, that even if you are not the deciderer for your papers, you can certainly have an opinion and advocate this opinion with the person in charge of the decision making.

My first observation is that there is nothing wrong with single-author papers. They might be rare these days but they do occur. So don’t be afraid to offer up a single-author paper now and again.

With that said, we now move on to the fact that the author line is a communication. Whether you are trying to convey a message about yourself as a scientist or not, your CV tells a story about you. And everything on there has potential implications for some audiences.

ethical, schmethical. Again, you don’t throw someone on a paper “just because”, you do it because they made a contribution. A contribution that you, as the primary/communicating/deciderering author, get to determine and evaluate. It is not impossible that these other people referred to in the Tweet made, or will make, a contribution. It could be via setting the environment (physical resources, administrative requirements, funding, etc), training the author or it could be through direct assistance with crafting the manuscript after all the work has been done. All of these are valid as domains for significant contribution.

This scenario of a private industry research lab appears, from the tweets, to be one where the colleagues and higher-ups are not intimately involved in pushing paper submissions. It appears to be a case where the author in question is deciding whether or not to even bother publishing papers. Therefore, the politics of ignoring more-senior folks (if any exist) are unclear. I can’t do much but read between the Tweet lines and assume this person is not risking annoying someone who is their boss. Obviously if someone of boss-like status would be miffed, it is in your interest to find some way for them to make a contribution that is significant in your own understanding, or at the very least to have a bloody discussion about it.

Leaving off the local politics, we can turn to the implications for your CV and the story of you as a scientist that it is going to tell.

If all you ever have are first-author publications it will look, to the modern eye, like you are non-collaborative, meaning not a team player. This is probably an impression you would like to avoid, yes, even within an industrial setting. But this is easy to minimize. I can’t set any hard and fast rules but if you have some solo-author and some multiple-author pubs sprinkled throughout your timeline, I can’t see this being a big deal. Particularly if your employment particulars do not demand a lot of pubs and, see above, the other people around you are not publishing. Eventually it would become clear that you are the one pushing publication so it isn’t weird to see solo-author works.

Consider, however, that you are possibly losing the opportunity to burnish your credentials. The current academic science arc has an expectation for first-author papers as a trainee (grad student, postdoc) which is then supposed to transition to last-author pubs as a scientific supervisory person (aka professor or PI). Industry, I surmise, can have a similar path whereby you start out as some sort of lowly Scientist and then transition to a Manager where you are supervising a team.

In both of these scenarios, academic and industry, looking like you are a team-organizing, synthetic force is good. Adding more authors can be helpful in creating this impression. Looking like you are the driving intellectual participant in a sub-area of science is also good. That concern looks like it votes for thinning your authorship lines: after all, someone else in your group might start to leech credit away from you if they appear consistently, or in a position (read: last author, co-contributing author) that implies they are more of the unifying intellectual driver.

This is where you need to actually think about your situation.

I tell trainees who are worried about being hosed out of that one deserved first-author position, or about being forced to accept a co-contributing second authorship, this: You are in for the long haul. If you are publishing multiple papers in this area of science (and you should be) then for the most part you will have first-author papers, and in the end analysis it will be clear that you are the consistent and most important participant. It will be a simple matter for your CV to communicate that you are the ONE. So it may not be worth sweating the small stuff on each contentious author issue.

In a related vein, it costs you little to be generous, particularly with middle authors that have next to no impact on your credit for this work.

If you only plan to publish one paper, obviously this changes the calculation.

Do you ever plan to make a push for management? Whether of the academic PI or industry variety, I think it is useful to lay down a record of being the leader of the team. That can mean being communicating author or being last author. At some point, even in industry, an ambitious scientist may wish to start being last author even under the above-mentioned scenario.

This is what brand new PIs have to do. Find someone, anyone to be the first author on pubs so that they can be the last author. This is absolutely necessary for the CV as a communication device. Undergrad volunteer? Rotation student? Summer intern? No problem, they can be the first author right? Their level of contribution is not really the issue. I can see an industry scientist that wants to start making a push for management doing something similar to this.

As always, I return to the concept that you have to do your own research within your own situation to figure out what the expectations are. Look at what most people like yourself, in your situation, tend to do. That’s your starting point. Then think about how your CV is going to look to people over the medium and long term. And make your authorship decisions accordingly.

Since many of you are AAAS members, as am I, I think you might be interested in an open letter blogged by Michael Balter, who identifies himself as “a Contributing Correspondent for Science and Adjunct Professor of Journalism at New York University”.

I have been writing continuously for Science for the past 24 years. I have been on the masthead of the journal for the past 21 years, serving in a variety of capacities ranging from staff writer to Contributing Correspondent (my current title.) I also spent 10 years as Science’s de facto Paris bureau chief. Thus it is particularly painful and sad for me to tell you that I will be taking a three-month leave of absence in protest of recent events at Science and within its publishing organization, the American Association for the Advancement of Science (AAAS).

Sounds serious.

What’s up?

Yet in the case of the four women dismissed last month, no such explanation was made, nor even a formal announcement that they were gone. Instead, on September 25, Covey wrote a short email to Science staff telling us who the new contacts were for magazine makeup and magazine layout. No mention whatsoever was made of our terminated colleagues. As one fellow colleague expressed it to me: “Brr.”

Four staff dismissals that he blames on a newcomer to the organization.


I think that this collegial atmosphere continued to dominate until earlier this year, when the changes that we are currently living through began in earnest. Rob Covey came on board at AAAS in September 2013, and at first many of us thought that he was serving mostly in an advisory capacity; after all, he had a reputation for helping media outlets achieve their design and digital goals, a role he had played at National Geographic, Discovery Communications, and elsewhere. I count myself among those who were happy about many of the changes he brought about, including the redesign of the magazine, the ramping up of our multimedia presence, etc. But somewhere along the way Covey began to take on more power and more authority for personnel decisions, an evolution that has generated increasing consternation among the staff in all of Science’s departments.

New broom sweeps?

(In addition, according to all the information I have been able to gather about it, Covey was responsible for one of the most embarrassing recent episodes at Science, the July 11, 2014 cover of the special AIDS issue. This cover, for which Science has been widely excoriated, featured the bare legs [and no faces] of transgender sex workers in Jakarta, which many saw as a crass objectification and exploitation of these vulnerable individuals. Marcia McNutt was forced to publicly apologize for this cover, although she partly defended it as the result of “discussion by a large group.” In fact, my understanding, based on sources I consider reliable, is that a number of members of Science’s staff urged Covey not to use the cover, to no avail.)

Remember this little oopsie?

This will be interesting to watch, particularly if we hear more about the July 11 cover and any possible role that the individuals Balter references in this statement, “The recent dismissal of four women in our art and production departments”, had in the arguments for opposing or approving it.

http://www.youtube.com/user/belinda243?v=0P5qlek08Wc


Open Mic Night

April 18, 2013

I just had a brilliant idea. Which means that probably someone else has had it before.

Have you ever heard of someone going to an open-mic night at the coffeeshop and laying down a science presentation?

I am disturbingly captivated by the idea of whipping out a laptop and projector and talking about some of our recent science at my local java joint…

One duffymeg at Dynamic Ecology blog has written a post in which it is wondered:

How do you decide which manuscripts to work on first? Has that changed over time? How much data do you have sitting around waiting to be published? Do you think that amount is likely to decrease at any point? How big a problem do you think the file drawer effect is?

This was set against the background of having conducted too many studies and not finding enough time to write them all up. I certainly concur that by the time one has been rolling as a laboratory for many years, the unpublished data do have a tendency to stack up, despite our best intentions. This is not ideal but it is reality. I get it. My prior comments about not letting data go unpublished were addressing the situation where someone (usually a trainee) wanted to write up and submit the work but someone else (usually the PI) was blocking it.

To the extent that I can analyze my de facto priority, I guess the first priority is my interest of the moment. If I have a few thoughts or new references to integrate with a project that is in my head…sure I might open up the file and work on it for a few hours. (Sometimes I have been pleasantly surprised to find a manuscript is a lot closer to submitting than I had remembered.) This is far from ideal and can hardly be described as a priority. It is my reality though. And I cling to it because dangit…shouldn’t this be the primary motivation?

Second, I prioritize things by the grant cycle. This is a constant. If there is a chance of submitting a manuscript now, and it will have some influence on the grant game, this is a motivator for me. It may be because I am trying to get it accepted before the next grant deadline. Maybe before the 30 day lead time before grant review when updating news of an accepted manuscript is permitted. Perhaps because I am anticipating the Progress Report section for a competing continuation. Perhaps I just need to lay down published evidence that we can do Technique Y.

Third, I prioritize the trainees. For various reasons I take a firm interest in making sure that trainees in the laboratory get on publications as an author. Middle author is fine but I want to chart a clear course to the minimum of this. The next step is prioritizing first author papers…this is most important for the postdocs, of course, and not strictly necessary for the rotation students. It’s a continuum. In times past I may have had more affection for the notion of trainees coming in and working on their “own project” from more or less scratch until they got to the point of a substantial first-author effort. That’s fine and all but I’ve come to the conclusion I need to do better than this. Luckily, this dovetails with the point raised by duffymeg, i.e., that we tend to have data stacking up that we haven’t written up yet. If I have something like this, I’ll encourage trainees to pick it up and massage it into a paper.

Finally, I will cop to being motivated by short term rewards. The closer a manuscript gets to the submittable stage, the more I am engaged. As I’ve mentioned before, this tendency is a potential explanation for a particular trainee complaint. A comment from Arne illustrates the point.

on one side I more and more hear fellow Postdocs complaining of having difficulties writing papers (and tellingly the number of writing skill courses etc offered to Postdocs is steadily increasing at any University I look at) and on the other hand, I hear PIs complaining about the slowliness or incapabability of their students or Postdocs in writing papers. But then, often PIs don’t let their students and Postdocs write papers because they think they should be in the lab making data (data that might not get published as your post and the comments show) and because they are so slow in writing.

It drives me mad when trainees are supposed to be working on a manuscript and nothing occurs for weeks and weeks. Sure, I do this too. (And perhaps my trainees are bitching about how I’m never furthering manuscripts I said I’d take a look at.) But from my perspective grad students and postdocs are on a much shorter time clock and they are the ones who most need to move their CV along. Each manuscript (especially first author) should loom large for them. So yes, perceptions of lack of progress on writing (whether due to incompetence*, laziness or whatever) are a complaint of PIs. And as I’ve said before, it interacts with the PI’s motivation to work on your draft. I don’t mind if it looks like a lot of work needs to be done but I HATE it when nothing seems to change following our interactions and my editorial advice. I expect the trainees to progress in their writing. I expect them to learn both from my advice and from the evidence of their own experiences with peer review. I expect the manuscript to gradually edge towards greater completion.

One of the insights that I gained from my own first few papers is that I was really hesitant to give the lab head anything short of what I considered to be a very complete manuscript. I did so and I think it went over well on that front. But it definitely slowed my process down. Now that I have no concerns about my ability to string together a coherent manuscript in the end, I am a firm advocate of throwing half-baked Introduction and Discussion sections around in the group. I beg my trainees to do this and to work incrementally forward from notes, drafts, half-baked sentences and paragraphs. I have only limited success getting them to do it, I suspect because of the same problem that I had. I didn’t want to look stupid and this kept me from bouncing drafts off my PI as a trainee.

Now that I think the goal is just to get the damn data in press, I am less concerned about the blah-de-blah in the Intro and Discussion sections.

But as I often remind myself, when it is their first few papers, the trainees want their words in press. The way they wrote them.

__
*this stuff is not Shakespeare, I reject this out of hand

LPU Redoux

April 12, 2013

Another round of trying to get someone blustering about literature “clutter” and “signal to noise ratio” to really explain what he means.

Utter failure to gain clarity.

Again.

Update 1:

It isn’t as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don’t ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the “clutter”, “signal to noise”, “complete story” and “LPU=bad” dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to mention something under 1% of what you happen across in this context then you are a very special princess-flower indeed.

Second, I wonder about the day to day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist. You read the studies and you see how it fits together in your own understanding of the natural world (or unnatural one if that’s your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure’s worth of help. Some will merely sow confusion…but proper confusion to help you avoid assuming some thing is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.

So what? That’s the job. That’s the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.

UPDATE 2:
Thoughts on the Least Publishable Unit

LPU

Authors fail to illuminate the LPU issue

Better Living Through Least Publishable Units

Yet, publishing LPU’s clearly hasn’t harmed some prominent people. You wouldn’t be able to get a job today if you had a CV full of LPU’s and shingled papers, and you most likely wouldn’t get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don’t completely understand this phenomenon.

Avalanche of Useless Science

Our problem is an “Avalanche of Low Quality Research”? Really?

Too Many Papers

We had some incidental findings that we didn’t think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) “incidental” results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).

For some reason the response on Twittah to the JSTOR downloader guy killing himself has been a round of open access bragging. People are all proud of themselves for posting all of their accepted manuscripts in their websites, thereby achieving personal open access.

But here is my question…. How many of you are barraged by requests for reprints? That’s the way open access on the personal level has always worked. I use it myself to request things I can’t get to by the journal’s site. The response is always prompt from the communicating author.

Seems to me that the only reason to post the manuscripts is when you are fielding an inordinate amount of reprint requests and simply cannot keep up. Say…more than one per week?

So are you? Are you getting this many requests?

Neuropolarbear has a post up suggesting that people presenting posters at scientific meetings should know how to give the short version of their poster.

My favorite time to see posters is 11:55 and 4:55, since then people are forced to keep it short.

If you are writing your poster talk right now, remember to use a stopwatch and make your 5 minute version 5 minutes.

Don’t even practice a longer version.

I have a suggestion.

Ask the person to tell you why they are there! Really, this is a several second exchange that can save a lot of time. For noobs, sure, maybe this is slightly embarrassing because it underlines that even if you have managed to scope out the name successfully you do not remember that this is some luminary in your subfield. Whatever. Suck it up and ask. It saves tremendous time.

If you are presenting rodent behavioral data and the person indicates that they know their way around an intravenous self-administration procedure, skip the methods! or just highlight where you’ve deviated critically from the expected paradigms. If they are some molecular douche who just stopped by because “THC” caught their eye then you may need to go into some detail about what sort of paradigms you are presenting.

Similarly if it is someone from the lab that just published a paper close to your findings, just jump straight to the data-chase. “This part of figure 2 totally explains what you just published”

Trust me, they will thank you.

As Neuropolarbear observes, if you’ve skipped something key, then this person will ask. Poster sessions are great that way.

Watch this video. If you are anything like me, you have essentially zero understanding of what this guy is talking about. To start with. It very rapidly devolves into technical jargon and insider references to things that I don’t really understand.

But you know what?

After a while you probably kinda-sorta pick up on what is going on and can kinda-sorta understand what he’s telling his audience. I think I am impressed by that part.

Watching this through also makes you realize that a computer-geek presentation really doesn’t differ much from the talks we give in our science subfields. And if you skip to the Q&A about two-thirds of the way through, you’ll see that this part is familiar too.

I think I may just make this a training video for my scientific trainees.