A Skeptic Over Coffee #1: Starter Kit

It takes effort and sustained vigilance to become an effective skeptic, with the penetrating mental focus to cut through the misleading. Honing one’s questioning acuity means hardening one’s mental defenses against charlatans, fraudsters, and the merely incompetent in all walks of life. With practice it’s possible to become the infamous “Reviewer Number 3” who gradually receives fewer and fewer invitations from editors of high-impact journals to peer-review “paradigm shifting” articles. It may seem like a grandiose dream, but you too can be the colleague who corrects the university press office’s outlandish claims about a colleague’s paper, causing their tenure review to be shelved for another year (for failure to be interviewed on Science Friday). If this glamorous lifestyle of modest claims and bold negations sounds appealing, read on!

I invite you to join me every once in a while to practice skepticism in these short segments designed to provide about one coffee's worth of skeptical inquiry. My day job pushing things around with lasers both takes a lot of time and requires that I drink a tremendous amount of coffee, so the concise aSOC format should fit right in with my new lab-monkey lifestyle.

Here is your Beginning Skeptics’ reading list:

  • A seminal paper by John Ioannidis runs the numbers on the over-abundance of false positives in the scientific literature.
    John P. A. Ioannidis. Why Most Published Research Findings Are False. PLoS Medicine (2005). DOI: 10.1371/journal.pmed.0020124

  • Retraction Watch is an important resource for any skeptic. If someone consistently publishes articles that end up retracted and no one notices, does anyone lose their scientist licence?
  • Jeffrey Beall runs blacklists of predatory publishers and journals taking advantage of pay-to-publish open access models at Scholarly Open Access. Also consider John Bohannon’s misleading report generalising predatory practices to OA publishers as a whole, and the ensuing criticism of his approach.
  • And remember your statistics (see the short false-discovery simulation after this list):
    http://xkcd.com/882/
    Why it Always Pays to Think Twice About Your Statistics
    An investigation of the false discovery rate and the misinterpretation of p-values

  • UPDATE: A recent and interesting look at the widespread inflation of scientific results:
    Megan L. Head, Luke Holman, Rob Lanfear, Andrew T. Kahn, Michael D. Jennions.
    The Extent and Consequences of P-Hacking in Science.
    PLoS Biology (2015). DOI: 10.1371/journal.pbio.1002106
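
To put rough numbers on the Ioannidis argument and the xkcd strip, here is a minimal simulation in Python. The base rate of true hypotheses, the effect size, and the sample size below are illustrative assumptions, not figures from any of the papers above:

```python
# Toy false-discovery simulation: test many hypotheses, most of them false,
# and ask what fraction of "significant" results are false positives.
# All parameters below are assumptions chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests = 10_000      # hypotheses tested across a field
prob_true = 0.10      # assumed base rate of true hypotheses
effect_size = 0.5     # assumed true effect, in standard-deviation units
n_per_group = 20      # samples per group in each study
alpha = 0.05

true_pos = false_pos = 0
for hypothesis_is_true in rng.random(n_tests) < prob_true:
    control = rng.normal(0.0, 1.0, n_per_group)
    shift = effect_size if hypothesis_is_true else 0.0
    treatment = rng.normal(shift, 1.0, n_per_group)
    if stats.ttest_ind(control, treatment).pvalue < alpha:
        if hypothesis_is_true:
            true_pos += 1
        else:
            false_pos += 1

discoveries = true_pos + false_pos
print(f"significant results: {discoveries}")
print(f"fraction that are false positives: {false_pos / discoveries:.2f}")
```

With these made-up but not implausible numbers, over half of the “discoveries” are false positives, even though every single test respected the conventional p < 0.05 threshold.
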
Is the future of scientific publishing in-house open access?

Photo from flickr user Tom Marxchivist, 1952 cover by Basil Wolverton, used under CC attribution license.

Those of you who frequent theScinder know that I am pretty passionate about how science is disseminated. You have probably noticed that, like their brethren in newsprint and magazines before them, the big-name publishers don’t know exactly how to react to a changing future, and despite what traditional publishers would have you believe, they are not immune to publishing tripe.

Nature may be butting heads with Duke University over requesting waivers from the open access policy in place there. Apparently the waiver request isn’t even necessarily based on the practical implementation of Duke’s open access policy (Nature allows articles to be made freely available in their final version 6 months after publication), but it does raise the question: how much hassle will universities and their faculty put up with before they take matters into their own hands? As MIT’s Samuel Gershman points out, modern publishing doesn’t cost all that much. Even the fairly exorbitant fees charged to authors by the “gold standard” open access publishers may be a transient relic of the conventional (turning archaic?) publishing business model, and those fees provide an incentive for predatory publishing (as discussed in this article at The Scientist, and the basis for the Bohannon article published in Science last October). But if peer review and editing are largely volunteer labour, performed as an essential component of the researcher’s role and with the bill largely footed as a public expenditure, why keep paying enormous subscription fees for traditional publishing? If the trend catches on, as it almost certainly will, leading institutions will continue to adopt open access policies and libraries will see less and less reason to keep paying for outdated subscriptions.

Relevant links:

Scholarly Publishing: Where is Plan B?

California university system considers boycotting Nature Publishing Group

Samuel Gershman’s ideal publishing model, the Journal of Machine Learning Research

Computer algorithm has more papers than you do!

Oh man, oh man.

Via Retraction Watch, I just learned that Cyril Labbé of Joseph Fourier University has found more than 120 published fake papers written by the algorithm known as SCIgen. That’s more than 100 published by IEEE and 16 by Springer, according to the Nature news article by Richard Van Noorden. These are mostly conference proceedings, but they’re purportedly peer reviewed.

You can make your own fake paper too. Here’s ours: Geld: A Methodology for the Improvement of Scatter/Gather I/O. The sketch below shows the basic trick.
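
SCIgen builds its papers by randomly expanding a hand-written context-free grammar. Here is a minimal sketch of that idea in Python; the grammar below is a toy invented for illustration and has nothing to do with SCIgen’s actual (far larger) rule set:

```python
# Toy context-free-grammar text generator in the spirit of SCIgen.
# The productions here are made up for illustration only.
import random

GRAMMAR = {
    "SENTENCE": [["We present", "NAME", ", a", "METHOD", "for", "GOAL", "."]],
    "NAME": ["Geld", "Flurp", "Snark"],
    "METHOD": [["ADJ", "methodology"], ["ADJ", "framework"], ["ADJ", "heuristic"]],
    "ADJ": ["scalable", "pervasive", "metamorphic", "probabilistic"],
    "GOAL": [
        "the improvement of scatter/gather I/O",
        "the refinement of consistent hashing",
        "the simulation of the lambda calculus",
    ],
}

def expand(symbol):
    """Recursively expand a grammar symbol; unknown tokens are terminals."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    if isinstance(production, str):  # an alternative that is itself a terminal
        return production
    return " ".join(expand(token) for token in production)

print(expand("SENTENCE").replace(" ,", ",").replace(" .", "."))
# e.g. "We present Geld, a scalable methodology for the improvement of scatter/gather I/O."
```

Grammatical, confident, and completely meaningless — which is exactly why it should never survive genuine peer review.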

Remember when John Bohannon wrote a somewhat misleading attack on the open access publishing model in Science? It seems traditional publishing has its own misgivings about peer review.

From Van Noorden’s report:

Labbé says that the latest discovery is merely one symptom of a “spamming war started at the heart of science” in which researchers feel pressured to rush out papers to publish as much as possible.

Indeed.

Related links:
Ike Antkare, one of the great stars in the scientific firmament

Uncle Sam poster modified from Wikipedia source.

Papers published beget more papers published

So what?

In a recent article first-authored by William Laurance, researchers report that, rather unremarkably, publishing more papers before receiving a PhD predicts that an individual will have a more successful career in research, with success measured solely by publication frequency. They also considered first language, gender, precociousness of first article, and university prestige. If publication frequency before attaining the PhD is the best predictor of career publication frequency, just how good is it? They report an r² value of about 0.14 for the best model incorporating pre-PhD publications, with models lacking this predictor faring much worse.

Wait, what?

If I have a model that explains only 14% of the variance in the data, well, I think it is time to find a new model. When they included the first three years immediately following the PhD, the r² value jumped to 0.29 for publications alone, and slightly higher when the model included one or more of the other predictors. Better, but still pretty pathetic. If your hiring metric explains only 29% of the variance in your measure of success, chances are you won’t be in charge of hiring for long. The paper only looked at the first ten years immediately following the PhD, so including the first three years is a bit like predicting rain when you are already wet. Why were the models so miserable? The range of publication frequency over the first ten years was wide, from 0 to 87 papers published. On top of that, the sample consisted only of individuals who had managed to land a university faculty job. That’s right, one or more of these scientists landed a tenure-track position with zero publications. Jealous? (For a feel for what r² = 0.14 actually buys you when comparing candidates, see the short simulation below.)
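
Here is a quick sketch of how weak an r² of 0.14 is for ranking two candidates. The data are simulated from made-up normal distributions, not the paper’s dataset; the point is only the arithmetic of shared variance:

```python
# How often does a predictor with r^2 = 0.14 pick the "better" of two
# candidates? Simulated, illustrative data -- not Laurance et al.'s.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r2 = 0.14

# Construct an outcome in which the predictor explains exactly r2
# of the variance (both variables standard normal).
predictor = rng.normal(size=n)
noise = rng.normal(size=n)
outcome = np.sqrt(r2) * predictor + np.sqrt(1.0 - r2) * noise

# Pair candidates off at random and check how often the one with the
# higher predictor value also has the higher outcome.
half = n // 2
picks_correct = np.mean(
    (predictor[:half] > predictor[half:]) == (outcome[:half] > outcome[half:])
)
print(f"higher predictor identifies the higher outcome {picks_correct:.0%} of the time")
```

This prints roughly 62%: better than a coin flip, but not by much, and that is the best model in the paper.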

The sample selection is a pretty major flaw of the paper, in my opinion. The scientists surveyed were all on one rung or another of the assistant/associate/full professor ladder, which is to say that everyone considered was an extremely high achiever among the total population of people holding biology PhDs. The rate of biology PhDs attaining faculty positions six years post-degree dropped from 55% in 1973 to 15% in 2006 [1]. Since their data represented only successful academics, their models had no chance of predicting which individuals would drop out of research altogether as opposed to going on to become a principal investigator. Predicting whether an individual is able and willing to continue in science research would be a lot more telling than whether they published 2 versus 10 articles per year in their first decade out of grad school.
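
There is also a standard statistical reason why a faculty-only sample makes any predictor look feeble: range restriction. A toy demonstration, with an invented population correlation and an invented selection rule, shows how much a correlation shrinks once you keep only the “winners”:

```python
# Range-restriction demo: selecting only successful individuals
# attenuates the predictor-outcome correlation. Invented data.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
pre_phd = rng.normal(size=n)                       # standardized pre-PhD output
career = 0.6 * pre_phd + 0.8 * rng.normal(size=n)  # r = 0.6 in the full population

print(f"full population r: {np.corrcoef(pre_phd, career)[0, 1]:.2f}")

# Keep only the top 15% on career outcome (landed a faculty job, say).
kept = career > np.quantile(career, 0.85)
print(f"faculty-only r:    {np.corrcoef(pre_phd[kept], career[kept])[0, 1]:.2f}")
```

In this made-up population the correlation drops from 0.6 to roughly 0.3 after selection, i.e. r² falls from 0.36 to about 0.10, so a sample built entirely from people who already succeeded can bury a genuinely useful predictor.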

Using publication frequency as the sole measure of success is certainly rife with limitations (though the authors do mention close agreement with the h-index). What about quality? What about real, meaningful contributions to the field? What about retractions? I would be much more interested in a model that could predict whether a researcher will have to withdraw an article during their career than one that predicts how many articles they might generate. Hopefully with a rather better r² than 0.14, though.

Publication is often referred to as the “currency” of academia. Well, I’d like to posit that this currency is purely fiat. If inflation continues as it has been [2], the rate of fraudulent papers can only increase [3]. In my estimation, 300 papers with 3 retractions are worth a lot less than a “measly” 30 papers total (one toy way to formalise that discount is sketched below). The commonplace occurrence of papers that must be withdrawn (not to mention fraudulent papers never outed, frivolous claims, and tenuous conclusions) has broader implications beyond an individual’s career or a journal’s bottom line. When bad science becomes the new normal, public trust deteriorates and anti-science sentiments thrive.
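
To make that valuation concrete, here is one toy scoring rule in which each retraction multiplies a researcher’s remaining credibility by a constant factor. The penalty factor is entirely invented to illustrate the point, not an established metric:

```python
# A toy retraction-discounted publication score. The trust_hit factor is
# an invented assumption, not a metric from the literature.
def trusted_score(papers: int, retractions: int, trust_hit: float = 0.6) -> float:
    """Count non-retracted papers, then shrink credibility per retraction."""
    return (papers - retractions) * (1.0 - trust_hit) ** retractions

print(trusted_score(300, 3))  # 297 * 0.4**3, roughly 19.0
print(trusted_score(30, 0))   # 30.0
```

Under this (admittedly arbitrary) discount, the 300-paper record with 3 retractions really is worth less than the clean 30-paper one.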

The authors of the paper did have what I would consider a good take-home: faced with two applicants, one with a PhD from a prestigious university and the other from a lesser-known institution, pick the one with the better publication record. I would go one further and encourage hiring decisions to be informed by actually reading the papers, and vetting the sources those papers cite. It’s not too hard, and if your job description includes hiring new talent, it’s your job. ‘A’s hire ‘A’s, and ‘B’s hire ‘C’s. Don’t be a ‘B.’ Science (with a capital ‘S’) depends on it.

Laurance et al. Predicting Publication Success for Biologists. BioScience (Oct. 2013).

Via Conservation Bytes.