http://www.youtube.com/user/belinda243?v=0P5qlek08Wc


Open Mic Night

April 18, 2013

I just had a brilliant idea. Which means that probably someone else has had it before.

Have you ever heard of someone going to an open-mic night at the coffeeshop and laying down a science presentation?

I am disturbingly captivated by the idea of whipping out a laptop and projector and talking about some of our recent science at my local java joint…

One duffymeg at the Dynamic Ecology blog has written a post in which she wonders:

How do you decide which manuscripts to work on first? Has that changed over time? How much data do you have sitting around waiting to be published? Do you think that amount is likely to decrease at any point? How big a problem do you think the file drawer effect is?

This was set within the background of having conducted too many studies and not finding enough time to write them all up. I certainly concur that by the time one has been rolling as a laboratory for many years, the unpublished data does have a tendency to stack up, despite our best intentions. This is not ideal but it is reality. I get it. My prior comments about not letting data go unpublished were addressing that situation where someone (usually a trainee) wanted to write up and submit the work but someone else (usually the PI) was blocking it.

To the extent that I can analyze my de facto priority, I guess the first priority is my interest of the moment. If I have a few thoughts or new references to integrate with a project that is in my head…sure I might open up the file and work on it for a few hours. (Sometimes I have been pleasantly surprised to find a manuscript is a lot closer to submitting than I had remembered.) This is far from ideal and can hardly be described as a priority. It is my reality though. And I cling to it because dangit…shouldn’t this be the primary motivation?

Second, I prioritize things by the grant cycle. This is a constant. If there is a chance of submitting a manuscript now, and it will have some influence on the grant game, this is a motivator for me. It may be because I am trying to get it accepted before the next grant deadline. Maybe before the 30-day window prior to grant review, during which updating reviewers with news of an accepted manuscript is permitted. Perhaps because I am anticipating the Progress Report section for a competing continuation. Perhaps I just need to lay down published evidence that we can do Technique Y.

Third, I prioritize the trainees. For various reasons I take a firm interest in making sure that trainees in the laboratory get on publications as authors. Middle author is fine but I want to chart a clear course to the minimum of this. The next step is prioritizing first author papers…this is most important for the postdocs, of course, and not strictly necessary for the rotation students. It’s a continuum. In times past I may have had more affection for the notion of trainees coming in and working on their “own project” from more or less scratch until they got to the point of a substantial first-author effort. That’s fine and all but I’ve come to the conclusion I need to do better than this. Luckily, this dovetails with the point raised by duffymeg, i.e., that we tend to have data stacking up that we haven’t written up yet. If I have something like this, I’ll encourage trainees to pick it up and massage it into a paper.

Finally, I will cop to being motivated by short term rewards. The closer a manuscript gets to the submittable stage, the more I am engaged. As I’ve mentioned before, this tendency is a potential explanation for a particular trainee complaint. A comment from Arne illustrates the point.

on one side I more and more hear fellow Postdocs complaining of having difficulties writing papers (and tellingly the number of writing skill courses etc offered to Postdocs is steadily increasing at any University I look at) and on the other hand, I hear PIs complaining about the slowliness or incapabability of their students or Postdocs in writing papers. But then, often PIs don’t let their students and Postdocs write papers because they think they should be in the lab making data (data that might not get published as your post and the comments show) and because they are so slow in writing.

It drives me mad when trainees are supposed to be working on a manuscript and nothing occurs for weeks and weeks. Sure, I do this too. (And perhaps my trainees are bitching about how I’m never furthering manuscripts I said I’d take a look at.) But from my perspective grad students and postdocs are on a much shorter time clock and they are the ones who most need to move their CV along. Each manuscript (especially first author) should loom large for them. So yes, perceptions of lack of progress on writing (whether due to incompetence*, laziness or whatever) are a complaint of PIs. And as I’ve said before, it interacts with the PI’s motivation to work on your draft. I don’t mind if it looks like a lot of work needs to be done but I HATE it when nothing seems to change following our interactions and my editorial advice. I expect the trainees to progress in their writing. I expect them to learn both from my advice and from the evidence of their own experiences with peer review. I expect the manuscript to gradually edge towards greater completion.

One of the insights that I gained from my own first few papers is that I was really hesitant to give the lab head anything short of what I considered to be a very complete manuscript. I did so and I think it went over well on that front. But it definitely slowed my process down. Now that I have no concerns about my ability to string together a coherent manuscript in the end, I am a firm advocate of throwing half-baked Introduction and Discussion sections around in the group. I beg my trainees to do this and to work incrementally forward from notes, drafts, half-baked sentences and paragraphs. I have only limited success getting them to do it, I suspect because of the same problem that I had. I didn’t want to look stupid and this kept me from bouncing drafts off my PI as a trainee.

Now that I think the goal is just to get the damn data in press, I am less concerned about the blah-de-blah in the Intro and Discussion sections.

But as I often remind myself, when it is their first few papers, the trainees want their words in press. The way they wrote them.

__
*this stuff is not Shakespeare, I reject this out of hand

LPU Redoux

April 12, 2013

Another round of trying to get someone blustering about literature “clutter” and “signal to noise ratio” to really explain what he means.

Utter failure to gain clarity.

Again.

Update 1:

It isn’t as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don’t ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the “clutter”, “signal to noise”, “complete story” and “LPU=bad” dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to mention something under 1% of what you happen across in this context then you are a very special princess-flower indeed.

Second, I wonder about the day to day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist. You read the studies and you see how they fit together in your own understanding of the natural world (or unnatural one if that’s your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure’s worth of help. Some will merely sow confusion…but proper confusion to help you avoid assuming something is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.

So what? That’s the job. That’s the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.

UPDATE 2:
Thoughts on the Least Publishable Unit

LPU

Authors fail to illuminate the LPU issue

Better Living Through Least Publishable Units

Yet, publishing LPU’s clearly hasn’t harmed some prominent people. You wouldn’t be able to get a job today if you had a CV full of LPU’s and shingled papers, and you most likely wouldn’t get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don’t completely understand this phenomenon.

Avalanche of Useless Science

Our problem is an “Avalanche of Low Quality Research”? Really?

Too Many Papers

We had some incidental findings that we didn’t think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) “incidental” results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).

For some reason the response on Twittah to the JSTOR downloader guy killing himself has been a round of open access bragging. People are all proud of themselves for posting all of their accepted manuscripts on their websites, thereby achieving personal open access.

But here is my question…. How many of you are barraged by requests for reprints? That’s the way open access on the personal level has always worked. I use it myself to request things I can’t get via the journal’s site. The response is always prompt from the communicating author.

Seems to me that the only reason to post the manuscripts is when you are fielding an inordinate number of reprint requests and simply cannot keep up. Say…more than one per week?

So are you? Are you getting this many requests?

Neuropolarbear has a post up suggesting that people presenting posters at scientific meetings should know how to give the short version of their poster.

My favorite time to see posters is 11:55 and 4:55, since then people are forced to keep it short.

If you are writing your poster talk right now, remember to use a stopwatch and make your 5 minute version 5 minutes.

Don’t even practice a longer version.

I have a suggestion.

Ask the person to tell you why they are there! Really, this is a several second exchange that can save a lot of time. For noobs, sure, maybe this is slightly embarrassing because it underlines that even if you have managed to scope out the name successfully you do not remember that this is some luminary in your subfield. Whatever. Suck it up and ask. It saves tremendous time.

If you are presenting rodent behavioral data and the person indicates that they know their way around an intravenous self-administration procedure, skip the methods! or just highlight where you’ve deviated critically from the expected paradigms. If they are some molecular douche who just stopped by because “THC” caught their eye then you may need to go into some detail about what sort of paradigms you are presenting.

Similarly if it is someone from the lab that just published a paper close to your findings, just jump straight to the data-chase: “This part of figure 2 totally explains what you just published.”

Trust me, they will thank you.

As Neuropolarbear observes, if you’ve skipped something key, then this person will ask. Poster sessions are great that way.

Watch this video. If you are anything like me, you have essentially zero understanding of what this guy is talking about. To start with. It very rapidly devolves into technical jargon and insider references to things that I don’t really understand.

But you know what?

After a while you probably kinda-sorta pick up on what is going on and can kinda-sorta understand what he’s telling his audience. I think I am impressed at that part.

Watching this through also makes you realize that a computer-geek presentation really doesn’t differ much from the talks we give in our science subfields. And if you skip through to the Q&A about two-thirds of the way through, you’ll see that this part is familiar too.

I think I may just make this a training video for my scientific trainees.

As the Impact Factor discussion has been percolating along (Stephen Curry, Björn Brembs, YHN) it has touched briefly on the core valuation of a scientific paper: Citations!

Coincidentally, a couple of twitter remarks today also reinforced the idea that what we are all really after is other people who cite our work.
Dr24hrs:

More people should cite my papers.

I totally agree. More people should cite my papers. Often.

AmasianV:

was a bit discouraged when a few papers were pub’ed recently that conceivably could have cited mine

Yep. I’ve had that feeling on occasion and it stings. Especially early in the career when you have relatively few publications to your name, it can feel like you haven’t really arrived yet until people are citing your work.

Before we get too far into this discussion, let us all pause and remember that all of the specifics of citation numbers, citation speed and citation practices are going to be very subfield dependent. Sometimes our best discussions are enhanced by dissecting these differences but let’s try not to act like nobody recognizes this, even though I’m going to do so for the balance of the post….

So, why might you not be getting cited and what can you do about it? (in no particular order)

1) Time. I dealt with this in a prior post on gaming the impact factor by having a lengthy pre-publication queue. The fact of the matter is that it takes a long time for a study that is primarily motivated by your paper to reach publication. As in, several years of time. So be patient.

2) Time (b). As pointed out by Odyssey, sometimes a paper that just appeared reached final draft status 1, 2 or more years ago and the authors have been fighting the publication process ever since. Sure, occasionally they’ll slip in a few new references when revising for the umpteenth time but this is limited.

3) Your paper doesn’t hit the sweet spot. Speaking for myself, my citation practices lean this way for any given point I’m trying to make: the first, the best and the most recent. Rationales vary and I would assume most of us can agree that the best, most comprehensive, most elegant and all around most scientifically awesome study is the primary citation. Opinions might vary on primacy but there is a profound sub-current that we must respect the first person to publish something. The most-recent is a nebulous concept because it is a moving target and might have little to do with scientific quality. But all else equal, the more recent citations should give the reader access to the front of the citation thread for the whole body of work. These three concerns are not etched in stone but they inform my citation practices substantially.

4) Journal identity. I don’t need to belabor this but suffice it to say some people cite based on the journal identity. This includes Impact Factor, citing papers on the journal to which one is submitting, citing journals thought important to the field, etc. If you didn’t happen to publish there but someone else did, you might be passed over.

5) Your paper actually sucks. Look, if you continually fail to get cited when you think you should have been mentioned, maybe your paper(s) just sucks. It is worth considering this. Not to contribute to Imposter Syndrome but if the field is telling you to up your game…up your game.

6) The other authors think your paper sucks (but it doesn’t). Water off a duck’s back, my friends. We all have our opinions about what makes for a good paper. What is interesting and what is not. That’s just the way it goes sometimes. Keep publishing.

7) Nobody knows you, your lab, etc. I know I talk about how anyone can find any paper in PubMed but we all need to remember this is a social business. Scientists cite people they know well, people they’ve just been chatting with at a poster session and people who have just visited for Departmental seminar. Your work is going to be cited more by people for whom you/it/your lab are most salient. Obviously, you can do something about this factor…get more visible!

8) Shenanigans (a): Sometimes the findings in your paper are, shall we say, inconvenient to the story the authors wish to tell about their data. Either they find it hard to fit it in (even though it is obvious to you) or they realize it compromises the story they wish to advance. Obviously this spans the spectrum from essentially benign to active misrepresentation. Can you really tell which it is? Worth getting angsty about? Rarely…

9) Shenanigans (b): Sometimes people are motivated to screw you or your lab in some way. They may feel in competition with you and, nothing personal but they don’t want to extend any more credit to you than they have to. It happens, it is real. If you cite someone, then the person reading your paper might cite them. If you don’t, hey, maybe that person will miss it. Over time, this all contributes to reputation. Other times, you may be on the butt end of disagreements that took place years before. Maybe two people trained in a lab together 30 years ago and still hate each other. Maybe someone scooped someone back in the 80s. Maybe they perceived that a recent paper from your laboratory should have cited them and this is payback time.

10) Nobody knows you, your lab, etc II, electric boogaloo. Cite your own papers. Liberally. The natural way papers come to the attention of the right people is by pulling the threads. Read one paper and then collect all the cited works of interest. Read them and collect the works cited in that paper. Repeat. This is the essence of graduate school if you ask me. And it is a staple behavior of any decent scientist. You pull the threads. So consequently, you need to include all the thread-ends in as many of your own papers as possible. If you don’t, why should anyone else? Who else is most motivated to cite your work? Who is most likely to be working on related studies? And if you can’t find a place for a citation….
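The “first, best, most recent” heuristic from point 3 is concrete enough to sketch in code. This is purely a toy model: the paper labels, years and “quality” scores below are invented, and the quality score stands in for a subjective judgment, not any real metric.

```python
# Toy model of the "first, best, most recent" citation heuristic.
# Each candidate paper is a (label, year, quality) tuple; quality is
# a made-up subjective score, not a real bibliometric quantity.
def pick_citations(papers):
    first = min(papers, key=lambda p: p[1])    # primacy: earliest report
    best = max(papers, key=lambda p: p[2])     # most scientifically awesome
    recent = max(papers, key=lambda p: p[1])   # front of the citation thread
    # One paper can fill more than one slot; cite each only once.
    cited = []
    for p in (first, best, recent):
        if p not in cited:
            cited.append(p)
    return cited

candidates = [
    ("Smith 1998", 1998, 6),   # first to publish the finding
    ("Jones 2005", 2005, 9),   # the tour-de-force study
    ("Lee 2012", 2012, 5),     # most recent thread-end
]
print([p[0] for p in pick_citations(candidates)])
```

If the tour-de-force paper also happens to be the most recent, the rule collapses to two citations, which matches how the heuristic works in practice.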

When you are reviewing papers for a journal, it is in your best interest to stake out papers most like your own as “acceptable for publication”.

If it is a higher IF than you usually reach, you should argue for a manuscript that is somewhat below that journal’s standard.

If it is a journal in which you have published, it is in your interest to crap on any manuscript that is lesser than your typical offerings.

We most recently took up the issue of the Least Publishable Unit of science in the wake of a discussion about first authorships (although I’ve been talking about it on blog for some time). In that context, the benefit of having more, rather than fewer, papers emerging from a given laboratory group is that individual trainees have more chance of getting a first-author slot. Or they get more of them. This is highly important in a world where the first-author publications on the CV loom so large. Huge in fact.

I’ve also alluded to the fact that LPU tendencies are a benefit to the conduct of science (as a group enterprise) because it allows the faster communication of results, the inclusion of more methodological detail (critical for replication and extension) and potentially the inclusion of more negative outcomes (which saves the group time).

I have also staked my claim that in an era when most of us find, sort and organize literature with search engine tools from our desktop computers, the “costs” of the LPU approach are minimal.

The recent APS Observer reprinted a column in the NYT that I’d originally missed, entitled “The Perils of ‘Bite-Size’ Science” (Marco Bertamini and Marcus R. Munafò; published January 28, 2012). Woot! No offense, commentariat, but you’ve done a dismal job so far of making an argument for why the LPU approach is so bad or detrimental to the conduct of science, particularly in response to my reasons. So I was really stoked to see this, in hopes of gaining some insight. I was sadly disappointed.

In the previous post on journal publishing, I observed that sub-sub-specialty journals were an anachronism of the era prior to the establishment of nearly comprehensive search engines and databases like PubMed. In that era, dividing the monthly output of scientific papers into journals made sense. First of all, it would be pretty hard to pick up a monthly issue of “The Omnibus Journal of Biomedical Science”. Second, it would be unduly laborious (and paper cut-y) to keep flipping around from some index or TOC to the abstracts you wanted to scan. So there were certain physical realities driving journal specialization.

Not to mention the fact that across the decades from 1886 to 1996 (when PubMed was established) there was a gradual and sustained addition of sub-specialty societies, narrower and narrower subfields of interest and an all around expansion of academic science. With each new subfield came the desire of yet another group of scientists to have the studies they most wanted to read selected into a smaller number of journals.

I am not privy to all the details of the history of journals in academic science. Not even close. But what I do know is that a publisher such as Elsevier has a metric boatload of small sub-specialty journals at the present time. Many of which are tied to an academic society. They continue to launch NEW ones. (Phew, I’m already link exhausted; Google “Official Journal Elsevier” and see what you get. The list is enormous.)

It is, or has been, in the interests of both Elsevier and the academic society to continue this arrangement. Occasionally societies will switch publishers. For example, Neuropsychopharmacology jumped from Elsevier to Nature Publishing Group in recent memory**. Occasionally you’ll be looking at the online site for a journal and notice a truncation in the archive…and have to Google around to figure out who used to publish the journal. Nevertheless, it is clear that Elsevier thinks these arrangements are good ones. Presumably because they get good return from libraries when they bundle a bunch of journals into a fixed price menu.

[Sidebar: This is a bit of a fly in the ointment, btw. One thing I do laud the publishers for is when they’ve taken effort to PDF all of their back catalog…back to vol 1, issue 1 in the dark ages in some cases. When there’s a shift in the publisher that took place prior to the online age it seems to me that their motivation for putting up a back catalog for a journal they no longer publish is not very high.]

What do the societies get in return?

I am, shall we say*, somewhat informed about moves by at least two society level journals to switch their default member subscription from print to online. The response seemed to be overwhelming approval and lack of opting-for-print amongst the memberships. No surprise, almost all of us are complete and total converts to the benefits of online access to journal articles and personal PDF archives on our computers. Yes, even the rapidly emeritizing cohort. Still, it is nice to see the data, so to speak. Nice to see that if a society stops sending print issues to clog up faculty bookshelves collecting dust, nobody objects.

But……ego. Somehow I bet the existing societies would get their backs up a little bit if there was a suggestion that they simply give up their journal. Neuropolarbear asked what could be done about the assy position being taken by some publishers on the Research Works Act issue. This is the one trying to reverse the law demanding the deposit of all NIH funded papers into PubMedCentral (in peer-reviewed, accepted, manuscript form).

One thing we could do is to demand our society journals stop working with the jerky publishers.

This thought is what brought up all the above blathering. It is very likely that these small journals couldn’t each make it on their own. Well, duh, of course not. As noted by the irrepressible Comrade Physioproffe

From what I understand, the other issue moneywise is that big publishers like Elsevier force institutions to pay subscription fees for shitteasse journals that no one reads by bundling them with their flagship journals. Those journals wouldn’t even exist if they had to survive on their own submission/publication fees.

But if all we’re talking about is a sort of virtual journal…why can’t some other umbrella journal publisher just kind of take up the slack? Why couldn’t a PLoS ONE type of outfit agree to provide all the publishing services and put some sort of tag on the article to group by academic society?

_
*christ that was priggish, wasn’t it?

**fascinating. In the case of Neuropsychopharmacology, the entire back catalog was transferred over to NPG so if you click on an article that your print copy insists was published by Elsevier, boom, you end up at NPG.

Websearch your CongressCritter and navigate to the email / reply form. Then give him or her an earful (eyeful) about the attempt by Reps Maloney and Issa to discontinue the requirement for publicly funded science to be made publicly available (established by the Omnibus Appropriations Act passed in March 2009).

Please. Put your Critter on alert that this is bad legislation that is bad for taxpayers. Additional detail is after the jump.

I am greatly enjoying reading this measured takedown

For example, the article on foxnews.com states, “Grad students often co-author scientific papers to help with the laborious task of writing. Such papers are rarely the cornerstone for trillions of dollars worth of government climate funding, however — nor do they win Nobel Peace prizes.” I will assume that the bit about “Nobel Peace prizes” was a mistake made by the Fox News writer, since as I’m sure you’re aware, scientific achievements do not lead to Peace prizes. Further, most science of any kind doesn’t lead to a Nobel Prize. They really don’t hand out that many of them.

But let’s de-construct this one a little more. Grad students often are the lead author on scientific publications, because they carried out the work. I know you feel that this shouldn’t be the case. How can they do science without a Ph.D?! Well, it turns out that’s how you get a Ph.D. By doing research that leads to publications.

of this variety of ignorant mewling about the conduct of science.

“We’ve been told for the past two decades that ‘the Climate Bible’ was written by the world’s foremost experts,” Canadian journalist Donna Laframboise told FoxNews.com. “But the fact is, you are just not qualified without a doctorate. In academia you aren’t even on the radar at that point.”

In academia, the people who are “on the radar” for any given topic are those who are most directly and deeply involved in the work. Sometimes that breadth and depth comes from a longer career in the field. Sometimes it comes because as a grad student you have done nothing else other than focus exclusively, think deeply and read exhaustively on a given topic. Ultimately, those who should be listened to most are those that know the most.

Academic credentials can be a marker of, but are no substitute for, expertise.

Citation

May 26, 2011

How often do you cite a paper for the overall, Gestalt thrust of the story? For the whole picture?

How frequently do you cite a paper for only a figure or two out of the whole thing? Or for a method?

What does this tell you about the notion that there is such a thing as a meaningful standard of a “complete story”?

Dr Becca has a post up in which she ponders a perennial issue for newly established labs….and many other labs as well.

The gist is that which journal you manage to get your work published in is absolutely a career concern. Absolutely. For any newcomers to the academic publishing game that stumbled on this post, suffice it to say that there are many journal ranking systems. These range from the formal to the generally-accepted to the highly personal. Scientists, being the people that they are, tend to take shortcuts when evaluating the quality of someone else’s work, particularly once it ranges afield from the highly specific disciplines which the reviewing individual inhabits. One such shortcut is inferring something about the quality of a particular academic paper by knowledge of the reputation of the journal in which it is published.

One is also judged, however, by the rate at which one publishes and, correspondingly, the total number of publications given a particular career status.

Generally speaking there will be an inverse correlation between rate (or total number) and the status of the journals in which the manuscripts are published.

This is for many reasons, ranging from the fact that a higher-profile work is (generally) going to require more work. More time spent in the lab. More experiments. More analysis. More people’s expertise. Also from the fact that a manuscript may need to be submitted to more journals (in sequence, never simultaneously), on average, to get accepted at a higher-profile venue than to get picked up by a so-called lesser journal.
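That “more submissions, on average” point can be made concrete with a back-of-the-envelope model. Assume, purely for illustration, that each submission is an independent trial with some fixed acceptance probability; then the expected number of sequential submissions before acceptance is 1/p, the mean of a geometric distribution. The acceptance rates below are invented, not real journal statistics.

```python
# Back-of-the-envelope: sequential submissions as geometric trials.
# expected_submissions(p) is the mean number of tries before the
# first acceptance when each try succeeds with probability p.
def expected_submissions(p_accept):
    if not 0 < p_accept <= 1:
        raise ValueError("acceptance probability must be in (0, 1]")
    return 1.0 / p_accept

# Hypothetical acceptance rates for illustration only.
print(round(expected_submissions(0.15), 1))  # glamour-chasing: ~6.7 tries
print(round(expected_submissions(0.60), 1))  # society journal: ~1.7 tries
```

The real process is messier (rejections are correlated, manuscripts improve between submissions), but the model captures why aiming high stretches out the time-to-print.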

This negative correlation of profile/reputation with publishing rate is Dr Becca’s issue of the day. When to keep bashing your head against the “high profile journal” wall and when to decide that the goal of “just getting it published” somewhere/anywhere* takes priority.

I am one who advises balance. The balance that says “don’t bet the entire farm” on unknowables like GlamourMag acceptance. The balance that says to make sure a certain minimum publication rate is obtained. And for a newly transitioning scientist, I think that “at least one pub per year” needs to be the target. And I mean, per year, in print, pulled up in PubMed for that publishing year. Not an average, if you can help it. Not Epub in 2011, print in 2012. Again, if you can help it.

The target. This is not necessarily going to be sufficient…and in some cases a gap of a year or two can be okay. But I think this is a good general rubric for triaging your submission strategy.
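The rubric is mechanical enough to check against a CV. A minimal sketch (the publication years below are invented for illustration): list the years in which papers actually appeared in print, and flag any calendar year in the window with nothing.

```python
# Check the "at least one paper in print per calendar year" rubric.
# print_years: years in which papers appeared in print (Epub-ahead-of-
# print years don't count under this rule). Returns the gap years.
def gap_years(print_years, start, end):
    covered = set(print_years)
    return [y for y in range(start, end + 1) if y not in covered]

# Hypothetical CV for someone who transitioned to independence in 2007.
years_in_print = [2007, 2008, 2008, 2010, 2011]
print(gap_years(years_in_print, 2007, 2012))  # 2009 and 2012 are gaps
```

Two papers in 2008 do nothing for the empty 2009, which is exactly the “not an average” point.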

It isn’t that one C/N/S pub won’t trump a sustained pub rate and a half-dozen society level publications. It will. The problem is that it is a far from certain outcome. So if you end up with a three year publication gap, no C/N/S pubs and you end up dumping the data in a half-dozen society level journal pubs anyway…well, in grant-getting and tenure-awarding terms, a 2-3 year publication gap with “yeah but NOW we’re submitting this stuff to dump journals like wildfire so it’s all good, k?” just isn’t smart.

My advice is to take care of business first, get that track record of 1-2 pubs per year going in bare-minimum or halfway-decent journals, and then to think about layering high-profile risky business on top of that.

Dang, I got all distracted. What I really meant to blog about was a certain type of comment popping up in Dr. Becca’s thread.

The kind of comment that I think pushes the commenter’s pet agenda, vis-à-vis academic publishing, over what is actually good advice for someone who is newly transitioned to an independent laboratory position. I have my own issues when it comes to this stuff. I think the reification of IF and the pursuit of GlamourMag publication is absolutely ruining the pursuit of knowledge and academic science.

But it is absolutely foolish and bad mentoring to ignore the realities of our careers and the judging of our talents and accomplishments. I’d rather nobody *ever* submitted to a journal solely because of the journal’s reputation. I long for the end of each and every academic journal in which the editors are anything other than actual working scientists. The professional journal “editors” will be, as they say, the first against the wall come the revolution in my glorious future. Etc.

But you would never catch me telling someone in Dr. Becca’s position that she should just ignore IF and journal status and publish everything in the easiest venue to get accepted. Never.

You wackaloon Open Access Nazdrul and followers need to dissociate your theology from your advice giving.
__
*there are minimum standards. “Peer Reviewed” is one such standard. I would argue that “indexed in PubMed” (or your relevant major database) is another such. Also, my arbitrary sub-field snobbery** starts at an Impact Factor of around 1.something…..however I notice that the IF of my touchstone journals for “the bottom” have inched up over the years. Perhaps “2” is my lower bound now.

**see? for some fields this is snobbery. for others, a ridiculous, snarky statement. Are you getting the message yet?