If you want to find out if a digital nematode is alive, try asking it.

Fancy living in a computer? Contributors to the OpenWorm project aim to make life inside a computer a (virtual) reality. In recent years, various brain projects have focused funding on moonshot science initiatives to map, model and ultimately understand the human brain: the computer that helps humans to cogito that they sum. These are similar in feel to the Human Genome Project of the late 1990s and early 2000s. Despite the inherent contradictions of the oft-repeated trope that the human brain is the “most complex thing in the universe,” it is indeed a complicated machine, decidedly more complex than the human genome. Understanding how it works will take more than mapping every connection, which is akin to knowing every node in a circuit but having no idea what each component is. A multivalent approach at the levels of cells, circuits, connections, and mind offers the most complete picture. OpenWorm coordinator Stephen Larson et al. aim to start by understanding something a little bit simpler: the determinate 302-neuron brain and accompanying body of Caenorhabditis elegans, a soil-dwelling nematode worm that has served as a workhorse in biology for decades.

Genome, Brain

The connectome, a neural wiring diagram of the worm’s brain, has already been mapped. Simulating the worm at the cellular level is an ongoing open-source software effort. The first human genome was sequenced only three years after the first C. elegans genome; a similar pace for full biological simulation in silico would mean that digital humans, or a reasonable facsimile, are possible within our lifetimes. When these simulations of people are able to fool observers, will these entities be alive and conscious? Have rights? Pay taxes? If a digital person claims the validity of their own consciousness, should we take their word for it, or derive some metric for ascertaining the consciousness of a simulated person based on our own inspection? For answers to questions of existence and sapience we can turn to our own experience (believing as we do that we are conscious entities), and to the venerable history of these questions in science fiction.

Conversation with the chatbot CleverBot (a conversational precursor to intelligent software), 24 December 2014.

In the so-called golden age of science fiction, characters tended to be smart, talented, and capable. Aside from an unnerving lack of faults and weaknesses, the protagonists were fundamentally human. The main difference between the audience and the actors in these stories was access to better technology. But it may be that this vision of a human future is comically (tragically?) myopic. Even our biology has been changing more quickly as civilisation and technologies develop. If we add a rate of technological advance that challenges the best-educated humans to keep pace, a speed-up of the rate of change in average meteorological variables, and human-driven selective pressure, the next century should be interesting, to say the least. When those unobtainyl transferase pills for longevity finally kick in, generational turnover can no longer be counted on to ease adaptation to a step-change in civilisation.

Greg Egan (who may or may not be a computer program) has been writing about software-based people for over two decades. When the mind of a human is not limited to run on a single instance of its native hardware, new concepts such as “local death” and traveling by transmission emerge intrinsically. Most of the characters in novels from writers such as Egan waste little time questioning whether they will still exist if they have to resort to a backup copy of themselves. As in flesh-and-blood humans, persistence of memory plays a key role in the sense of self, but is not nearly so limited. If a software person splits themselves to pursue two avenues of interest, they may combine their experiences upon their reunion, rejoining as a single instance with a transiently bifurcated path. If the two instances of a single person disagree as to their sameness, they may decide to go on as two different people. These simulated people would be unlikely to care (beyond their inevitable battle for civil rights) whether you consider them to be alive and sapient or not, any more so than the reader is likely to disbelieve their own sapience.

Many of the thought experiments associated with software-based personhood are prompted by a human perception of dubiousness in duplicity: two instances of a person existing at the same time, but not sharing a single experience, don’t feel like the same person. Perhaps as the OpenWorm project develops we can watch carefully for signs of animosity and existential crisis among a population of digital C. elegans twinned from the same starting material. We (or our impostorous digital doppelgängers, depending on your perspective) may find out for ourselves what this feels like sooner than we think.

2014-12-29 – Leading comic edited for improved comedic effect

Why it always pays (95% C.I.) to think twice about your statistics


The northern hemisphere has just about reached its maximum tilt away from the sun, which means many academics will soon get a few days or weeks off to . . . revise statistics! Winter holidays are the perfect time to sit back, relax, take a fresh, introspective look at the research you may have been doing (and that which you haven’t), and catch up on all that work you were too distracted by work to do. It is a great time to think about the statistical methods in common use in your field and what they actually mean about the claims being made. Perhaps an unusual dedication to statistical rigour will help you become a stellar researcher, a beacon to others in your discipline. Perhaps it will just turn you into a vengefully cynical reviewer. At the least, it should help you make a fool of yourself ever-so-slightly less often.

First, test your humour (description follows in case you prefer a mundane account to a hilarious webcomic):

In the piece linked above, Randall Munroe highlights the low threshold for reporting significant results in much of science (particularly biomedical research), and specifically the way these uncertain results are over-reported and misreported in the lay press. The premise is that researchers perform experiments to determine whether jelly beans of 20 different colours have anything to do with acne. After setting their p-value threshold at 0.05, they find in one of the 20 experiments a statistically significant association between green jelly beans and acne. I would consider the humour response to this webcomic a good first-hurdle metric if I were a PI interviewing applicants for new student or post-doc positions.

In Munroe’s comic, the assumption is that jelly beans never have anything to do with acne and that 100% of the statistically significant results are due to chance. Assuming that all of the other results were also reported in the literature somewhere (although not likely to be picked up by the sensationalist press), this would give the proportion of reported results that fail to reflect reality at an intuitive and moderately acceptable 0.05, or 5%.
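Munroe’s one-in-twenty “green jelly bean” hit is exactly what multiple testing predicts. A quick back-of-the-envelope check (numbers taken from the comic’s premise, not from any real study):

```python
# With 20 independent tests of true null effects at alpha = 0.05,
# the chance of at least one spurious "significant" result is:
alpha = 0.05
n_tests = 20
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"{p_at_least_one:.0%}")  # 64%
```

So even when no jelly bean colour does anything at all, roughly two runs in three of the full 20-colour experiment will hand the press a headline.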
Let us instead consider a slightly more lab-relevant version:

Consider a situation where some jelly beans do have a relationship to the medical condition of interest; say 1 in 100 jelly bean variants is actually associated in some way with acne. Let us also swap small molecules for jelly beans, and cancer for acne, and use the same p-value threshold of 0.05. We are unlikely to report negative results where the small molecule has no relationship to the condition. We test 10,000 different compounds for some change in a cancer phenotype in vitro.

Physicists may generally wait for 3–6 sigma of significance before scheduling a press release, but for biologists publishing papers the typical p-value threshold is 0.05. If we use this threshold, perform our experiment, and go directly to press with the statistically significant results, 83.9% of our reported positive findings will be wrong. In the press, a 0.05 p-value will often be interpreted as “only a 5% chance of being wrong.” That is certainly not what we see here, but after some thought the error rate is expected and fairly intuitive. Allow me to illustrate with numbers.

As specified in the thought experiment, 1% of these compounds, or 100, have a real effect. Setting our p-value threshold at the widely accepted 0.05, we will also uncover, purely by chance, non-existent relationships between our cancer phenotype of interest and 495 of the compounds (0.05 × 9,900 with no effect). If we assume a statistical power of 0.95 (complementary to the 0.05 false-positive rate), we will pick up 95 of the 100 actual cases we are interested in. Our total positive results will be 495 + 95 = 590, but only 95 of those reflect a real association. 495/590, or about 83.9%, will be false positives.
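The whole calculation fits in a few lines of Python, using the made-up screening numbers from the thought experiment above:

```python
# False-discovery arithmetic for the hypothetical compound screen:
# 10,000 compounds, 1% with a real effect, alpha = 0.05, power = 0.95.
n_compounds = 10_000
real_fraction = 0.01
alpha = 0.05    # false-positive rate per test
power = 0.95    # chance of detecting a real effect

n_real = round(n_compounds * real_fraction)   # 100 real effects
n_null = n_compounds - n_real                 # 9,900 duds

false_positives = alpha * n_null              # 495 flukes
true_positives = power * n_real               # 95 real hits
fdr = false_positives / (false_positives + true_positives)
print(f"false discovery rate: {fdr:.1%}")     # 83.9%
```

Note that the false discovery rate depends on all three inputs: shrink the fraction of real effects and the situation gets even worse, which is exactly why speculative screens at p < 0.05 are so unreliable.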

Such is the premise of a short and interesting write-up by David Colquhoun on false discovery rates [2]. The emphasis is on biological research because that is where the problem is most visible, but the considerations discussed should be of interest to anyone conducting research. On the other hand, let us remember that confidence due to technical replicates does not generally translate to confidence in a description of reality: the statistical confidence in the data from the now-infamous faster-than-light neutrinos from the OPERA detector was very high, but the source of the anomaly was instrumentation, and two top figures from the project eventually resigned after overzealous press coverage pushed the experiment into the limelight. Paul Blainey et al. discuss the importance of considering the effect of technical and biological (or, more generally, experimentally relevant) replicates in a recent Nature Methods commentary [3].

I hope the above illustrates my thought that a conscientious awareness of the common pitfalls in one’s own field, as well as those with which one closely interacts, is important for slogging through the avalanche of results published every day and for producing brilliant work of one’s own. This requires continued effort in addition to an early general study of statistics, but I would suggest it is worth it. To quote [2]: “In order to avoid making a fool of yourself you need to know how often you are right when you declare a result to be significant, and how often you are wrong.”


[1] Munroe, Randall. Significant. xkcd.

[2] Colquhoun, David. An investigation of the false discovery rate and the misinterpretation of p-values. Royal Society Open Science, 19 November 2014. DOI: 10.1098/rsos.140216.

[3] Blainey, Paul, Krzywinski, Martin, and Altman, Naomi. Points of Significance: Replication. Nature Methods (2014) 11(9): 879–880.


Philaephilia n. Temporary obsession with a logistically important and risky stage of scientific endeavour and cometary rendezvous.

Don’t worry, the condition is entirely transient

Rivalling the 7 minutes of terror of NASA’s Curiosity rover entering the Martian atmosphere, Philae’s descent onto comet 67P/Churyumov-Gerasimenko on Wednesday, as part of the European Space Agency’s Rosetta mission, had the world excited about space again.

Comets don’t have the classic appeal of planets like Mars. The high visibility of Mars missions and moon shots has roots in visions of a Mars covered in seasonal vegetation and full of sexy humans dressed in scraps of leather, and little else. But comets may be much better targets in terms of scientific benefits. Comets are thought to have added water to the early Earth, after the young sun had blasted the substance out to the far reaches of the solar system, beyond the realm of the rocky planets. Of course, comets are also of interest for pure novelty: until Philae, humans had never gently put a machine down on a comet. Now the feat has been accomplished three times, albeit a bit awkwardly, with all science instruments surviving two slow bounces and an unplanned landing site. It is unfortunate that Philae is limited to only 1.5 hours of sunlight per 12-hour day, but there is some possibility that a last-minute attitude adjustment arranged the solar panels a bit more favourably.

So if Rosetta’s Philae lander bounced twice, rather than grappling the surface as intended, and landed in a wayward orientation where its solar panels are limited to only 12.5% of nominal sun exposure, how is the mission considered a success?

Most likely, the full significance of the data relayed from Philae via Rosetta will take several months of analysis to uncover. Perhaps some of the experiments will be wholly inconclusive and observational, neither confirming nor denying hypotheses about the characteristic structure of comets. For example, it seems unlikely that the MUPUS instrument (i.e. cosmic drill) managed to penetrate a meaningful distance into the comet, and we probably won’t gain much insight concerning the layers of a comet beyond the top centimetre or so. In contrast, CONSERT may yield unprecedented observations about the interior makeup of a comet.

In science, failures and negative findings are often more conclusive than, and arguably preferable to, so-called positive results, despite the selective pressure for the latter in science careers and the lay press. An exception disproves the rule, but a finding in agreement with theory merely “fails to negate” said theory. For example, we now know better than to use nitrocellulose as a vacuum propellant. Lesson learned on that front.

In addition to a something-divided-by-nothing fold increase in knowledge about the specific scenario of attempting a soft landing on a comet, I’d suggest we now know a bit more about the value of autonomy in expeditions where the signal delay between mission control and operations rules out real-time feedback. Perhaps if Philae had been optimised for adaptability, it would have been able to maintain its orientation to the comet surface after detecting that the touchdown and grapple didn’t go through, and give Rosetta and scientists at home a better idea of its (final) resting place. Space science is necessarily cautious, but adaptive neural networks and other alternative avenues may prove useful in future missions.

I’ll eagerly await the aftermath, when the experimental and telemetry data have been further analysed. The kind of space mission where a landing sequence can omit a major step and still achieve operational success of all scientific instruments on board is the kind of mission that space agencies should focus on. The Rosetta/Philae mission combined key elements of novelty (first soft landing on, and persistent orbiting of, a comet), low cost (comparable to a few space shuttle missions), and robustness (the grapples didn’t fire, the lander bounced and was lost from view, and science still occurred). Perhaps we’ll see continued ventures from international space agencies into novel, science-driven expeditions. Remember, the first scientist on the moon was on the (so far) final manned mission to Luna. Missions in the style of Rosetta may be more effective and valuable on all three of the above points, and are definitely more fundamental in terms of science achieved, than continuous returns to Mars and pushes for manned missions. In a perfect world where space agencies operate in a non-zero-sum funding situation along with all the other major challenges faced by human society, we would pursue them all. But realistically, Philae has shown that alternative missions not only offer more for us to learn in terms of science and engineering, but can also enrapture the population in a transcendent endeavour. Don’t stop following the clever madness of humans pursuing their fundamental nature of exploring the universe they live in.

The advantages of parametric design

I work primarily in OpenSCAD when making designs for 3D printing (and 2D designs for laser cutting). This means that instead of a WYSIWYG interface driven primarily by the mouse, my designs are all scripted in a programming language that looks a lot like C. This might seem more difficult at first (and it is certainly less than ideal for some situations), but it makes for a pretty simple way to generate repetitive structural elements with basic flow control, i.e. for loops. Even more importantly, it means that I can substantially change a design by modifying the variable values passed to a function (called a module in OpenSCAD). For the sake of an example, take Lieberkühn reflectors for macrophotography. The Lieberkühn reflector is a classic illumination technique that has mostly fallen out of style in favour of more modern options such as LED or fibre-based lighting, but it remains quite elegant and offers a few unique advantages. I have been working with these in conjunction with a few different lenses, mostly with the help of a macro bellows. The bellows allows variable working distances as well as magnifications, so the focus of a given Lieberkühn will be effective only within a narrow range of macro-bellows lengths. Parametric designs such as the ones I create and work with in OpenSCAD allow me to change attributes such as the nominal working distance without starting each design from scratch. For example:


[Renderings of the same parametric design at 35 mm, 30 mm, 25 mm, and 20 mm Lieberkühn focus]

This approach has proven highly useful for me, both for creating highly customisable designs and for iterating to get the fit just right. I’ll post the results of my latest exploration of Lieberkühn reflectors soon after I receive the latest realisation in Shapeways bronzed steel.
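The underlying idea translates to any language. As a hypothetical illustration (this is not my actual OpenSCAD module, and a simple paraboloid is a simplification of a real Lieberkühn profile), regenerating a reflector cross-section from a single focal-distance parameter might look like:

```python
# Hypothetical sketch of the parametric idea: treat the reflector as a
# concave parabolic mirror whose focus sits at the subject. Changing one
# parameter (the focal distance) regenerates the whole profile, much as
# changing a module argument does in OpenSCAD.
def reflector_profile(focal_mm, r_inner=10.0, r_outer=35.0, steps=6):
    """Return (radius, height) points of a paraboloid z = r^2 / (4f)."""
    pts = []
    for i in range(steps + 1):
        r = r_inner + (r_outer - r_inner) * i / steps
        z = r * r / (4.0 * focal_mm)
        pts.append((round(r, 2), round(z, 2)))
    return pts

for f in (35, 30, 25, 20):   # the focal distances pictured above
    print(f, reflector_profile(f)[:2])
```

In OpenSCAD the analogous profile would feed a rotate_extrude to form the solid part; the point in either language is that one number regenerates the entire geometry.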

Have we really lost 52% of the world’s animals?

The methods used by the LPI should not be accepted without reservation

Turning a critical eye on the 2014 Living Planet Report.

WWF’s Living Planet Report (LPR) 2014 has been making headlines because of its alarming claim that population sizes of mammals, birds, reptiles, amphibians and fish have dropped by half since 1970. The report reached this stark (and widely shared) conclusion via the Living Planet Index (LPI), a “measure of the state of the world’s biological diversity based on population trends of vertebrate species from terrestrial, freshwater and marine habitats” developed by scientists at WWF and the Zoological Society of London (ZSL). The LPI was adopted by the Convention on Biological Diversity (CBD) as a progress indicator for its 2020 goal to “take effective and urgent action to halt the loss of biodiversity”, which sadly (but unsurprisingly) appears to be failing.

In the previous edition of the LPR published two years ago, the drop in vertebrate numbers was estimated to be 30%. Now the scientists behind the LPI claim to have improved the method, resulting in a much greater decrease (52%) than previously reported. But the methodology is still highly controversial.

The team estimated trends in 10,380 populations of 3,038 mammal, bird, reptile, amphibian and fish species using 2,337 data sources including published scientific literature, online databases, and grey literature. The data used in constructing the index are time series of either population size, density, abundance or a “proxy of abundance”, e.g. bird nest density when there were no bird counts available.

The collection and analyses of these data represent an enormous amount of work, and the team responsible deserves praise for undertaking this huge project and for creating an urgent call to action for wildlife conservation. However, we need to bear in mind that this dramatic “halving” of the world’s vertebrates is a grotesque oversimplification of biodiversity loss. The diversity of data sources and types used, the variability in data quality, as well as the uncertainty behind many of the population trend estimates mean that the LPI is probably not very reliable.

Additionally, the 3,038 species included in the analyses represent only 4.8% of the world’s 62,839 described vertebrate species. (The report entirely omits invertebrates, which are often keystone species and vastly outnumber all vertebrate animals.) Following criticism of the methodology of previous LPIs, this year the LPI team used the estimated number of species in different taxonomic groups and biogeographic areas to apply weightings to the data. This means that the population trend of a particular taxonomic group becomes more important if the group comprises a large number of species, whereas the population trend of a species-poor taxon is allocated considerably less weight. To illustrate this, let us consider fishes, which in the LPI analysis represent the largest proportion of vertebrate species in almost all biogeographic areas and therefore carry the most weight. My guess is that the fish species whose population trends are sufficiently documented to be included in the analysis are most often in serious decline, because well-studied species are usually those that are either overharvested or frequent victims of bycatch. Therefore, the negative fish trend contributed more to the final 52% figure than the decline of any other taxonomic group. Ironically, by trying to decrease error from taxonomic bias in available data, this method allows well-known species to drive the overall trend and does not deal with the problem of underrepresentation of less-studied species. Many of these less visible species, outside of human interest as food or pests, contribute substantially to overall biodiversity and ecosystem function.
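To see how species-count weighting lets one group dominate, consider a toy index with entirely made-up numbers (this is not the LPI’s actual data, nor its geometric-mean methodology, just the weighting idea):

```python
# Toy Living-Planet-style aggregation (hypothetical figures): each taxon's
# mean population trend is weighted by its share of described species, so
# a species-rich, declining group ("fish" here) dominates the aggregate.
trends = {           # average annual population change per taxon (made up)
    "fish": -0.04,
    "birds": -0.01,
    "mammals": -0.02,
}
species_counts = {   # described species per taxon (illustrative only)
    "fish": 33_000,
    "birds": 10_000,
    "mammals": 5_500,
}
total = sum(species_counts.values())
weighted_trend = sum(trends[t] * species_counts[t] / total for t in trends)
print(f"weighted annual change: {weighted_trend:+.3%}")
```

Even though the birds and mammals in this toy decline far more slowly, the aggregate (about -3.2% per year here) sits close to the fish trend, simply because fish contribute roughly two-thirds of the species and therefore two-thirds of the weight.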

Should we believe the shocking headlines? Have we really killed “half of the world’s animals”? Probably not. Conservationists hope that this type of dramatic statement will inspire action but the severity of the claim risks desensitising the public, achieving the opposite of its intended effect. Developing a clear picture of the degree of the threats humans pose to biodiversity is difficult, but imperfect knowledge is no excuse for negligence. We know for certain that we are driving species to extinction at an alarming rate and that this will have serious implications for the environment, economies, and human health. Is this knowledge really not sufficient to motivate urgent and meaningful conservation action?

Olivia Nater is a conservationist and biologist who is particularly fond of bees. Twitter @beeologist

How to win the Olympus Bioscapes photomicrography contest


All you need to win a $5,000 microscope is a $250,000 microscope

It is almost time to dust off your cover-image quality photomicrographs and enter the Olympus Bioscapes microscopy contest. Judging by the techniques used by contest winners since the contest’s inauguration in 2004, the best way to better your chances is to use a confocal microscope. A side-effect of inventing a technique that wins a Nobel Prize is that eventually it becomes run-of-the-mill, and “conventional” widefield fluorescence also makes a good showing. Biophotonics purists will find plenty to like as well: transmitted light microscopy is well represented in a smattering of techniques including differential interference contrast, Zernike phase contrast, polarised light, Rheinberg illumination and Jamin-Lebedeff interference.


Confocal may be at the top of the heap at the moment, but transmitted light techniques continue to make strong appearances in stunning images among the top-ten places in Olympus Bioscapes.

In a promising development, computational imaging techniques also find success in the contest. The broad term “computational optics” covers techniques such as structured illumination, in which the patterns in several images (rather uninspiring on their own) are combined to give a computed image with resolution slightly better than the physically imposed diffraction limit. Also in this category is light sheet microscopy, which creates nice images on its own (and has since the Ultramikroskop of the early 1900s), but is even better suited to combining many images to form a volume image. In my opinion, treating light as computable fields, equally amenable to processing in physical optics or electronics, is the enabling philosophy for the next deluge of discoveries to be made with biomicroscopy.

Compare the winningest techniques from the Olympus contest with those of the Nikon Small World contest below. Interestingly enough, confocal microscopy falls behind the simpler widefield fluorescence in the Nikon contest, and both have been bested throughout the history of the competition by polarised light microscopy. Some of the differences in Olympus and Nikon contest winners may be due to the timing of technological breakthroughs. Bioscapes began in 2004, while Small World has been in operation since the 1970s. The vogue techniques and state of the art have certainly evolved over the last four decades.

Nikon Small World Winners

Seeing at Billionths and Billionths

This was my (unsuccessful) entry into last year’s Wellcome Trust Science Writing Prize. It is very similar to the post I published on the 24th of June, taking a slightly different bent on the same theme.

The Greek word skopein underlies the etymology of a set of instruments that laid the foundations for our modern understanding of nature: microscopes. References to the word are recognisable across language barriers thanks to the pervasive influence of the ancient languages of scholarship, and common usage gives us hints as to the meaning. We scope out a new situation, implying that we not only give a cursory glance but also take some measure or judgement.

Drops of glass held in brass enabled Robert Hooke and Antonie van Leeuwenhoek to make observations that gave rise to germ theory. Light microscopy unveiled our friends and foes among bacteria, replacing humours and miasmas as the primary effectors driving human health and disease. The concept of miasmatic disease, a term that supposes disease is caused by tainted air, is now so far-fetched that it has been almost entirely lost to time. The bird-like masks worn by plague doctors were stuffed with potpourri: the thinking of the time was that fragrance alone could protect against the miasma of the Black Death. The idea seems silly to us now, thanks to the fruits of our inquiry. The cells described by Hooke and the “animalcules” seen by Leeuwenhoek marked a transition from a world operated by invisible forces to one in which the mechanisms of nature were vulnerable to human scrutiny. In short, science was born in the backs of our eyes.

The ability of an observer using an optical instrument to differentiate between two objects has, until recently, been limited by the tendency of waves to bend at boundaries, a phenomenon known as diffraction. The limiting effects of diffraction were formalised by German physicist Ernst Abbe in 1873. The same effect can be seen in water ripples bending around a pier.

If the precision of optical components is tight enough to eliminate aberrations, and seeing conditions are good enough, imaging is “diffraction-limited.” With the advent of adaptive optics, dynamic mirrors and the like let observers remove aberrations arising from the sample as well as from the optics. Originally developed to spy on dim satellites through a turbulent atmosphere, adaptive optics has recently been applied to microscopy to counteract the blurring effect of looking through tissue. If astronomy is like looking out from underwater, microscopy is like peering at the leaves at the bottom of a hot cuppa, complete with milk.

Even with the precise control afforded by adaptive optics, the best possible resolution is still governed by the diffraction limit: about half the wavelength of light. Before Leeuwenhoek’s time the microbial world was invisible; through most of the 20th century, likewise, the molecular machinery underpinning the cellular processes of life remained invisible, smeared by the diffraction limit into an irresolvable blur.
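Abbe’s limit is simple enough to compute directly; for visible light and even the best oil-immersion objectives it lands around 200 nm (the values below are typical textbook numbers, not measurements from any particular instrument):

```python
# Abbe's diffraction limit: the smallest resolvable separation is roughly
# d = wavelength / (2 * NA), where NA is the numerical aperture of the
# objective. For green light and a good oil-immersion objective:
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2.0 * numerical_aperture)

d = abbe_limit_nm(550, 1.4)   # i.e. "about half the wavelength"
print(f"{d:.0f} nm")          # 196 nm
```

Anything organised below roughly this scale, which includes most molecular machinery, blurs together in a conventional light microscope.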

A human cell is typically on the order of ten microns in diameter. Its proteins, membranes, and DNA are organised at a level about one-thousandth that size, in the tens and hundreds of nanometres. In a conventional microscope, information at this scale is not retrievable thanks to diffraction, but it underlies all of life. Many of the mechanisms of disease operate at this level as well, and knowledge about how and why cells make mistakes has resounding implications for cancer and ageing. In the past few decades physicists and microscopists have developed a number of techniques to go beyond the diffraction limit and measure the nanometric technology that makes life.

A number of techniques have been developed to surpass the diffraction barrier. They vary widely in their use of engineered illumination and/or engineered fluorescent proteins. What they have in common is computation: the computer has become as important an optical component as a proper lens.

New instrumentation enables new measurements at the behest of human inquiry. Questions about biology at increasingly small spatial scales under increasingly challenging imaging contexts generate the need for higher precision techniques, in turn loosening a floodgate on previously unobtainable data. New data lead to new questions, and the cycle continues until it abuts the fundamental laws of physical nature. Before bacteria were discovered, it was impossible to imagine their role in illness and equally impossible to test it. Once the role was known, it became a simple intuitive leap for Alexander Fleming to suppose the growth inhibition of bacteria by fungi he saw in the lab might be useful as medicine. With the ability to see at the level of tens of nanometres, another world of invisible forces has been opened to human consideration and innovation. Scientists have already leaped one barrier at the diffraction limit. With no fundamental limit to human curiosity, let us consider current super-resolution techniques as the first of many triumphs in detection past limits of what is deemed possible.

Rubbish in, Garbage Out?

Extraordinary claims require extraordinary press releases?

You have probably read a headline in the past few weeks stating that NASA has verified that an infamous, seemingly reactionless propulsion drive does in fact produce force. You also might not have read the technical report that spurred the media frenzy (relative to the amount of press coverage normally allocated to space propulsion research, anyway), relying instead on the media reports and their contracted expert opinion. The twist is that it seems no one else, excepting perhaps the participants of the conference at which it was presented, has read it either; this includes myself and likely the authors of almost any other material you find commenting on it. The reason is that the associated entry in the NASA Technical Reports Server consists only of an abstract.

The current upswing of interest and associated speculation on the matter of this strange drive is eerily reminiscent of other recent \begin{sarcasm}groundbreaking discoveries\end{sarcasm}: FTL neutrinos measured by the OPERA experiment, and the Arsenic Life bacterium from Mono Lake, California. Both were later refuted; some important people at OPERA ended up resigning, and the Arsenic Life paper continues to boost the impact factors of its authors and publisher as Science Magazine refuses to retract it (current citations, according to Google Scholar, number more than 300).

I would venture that the manner of disclosing the OPERA findings was done more responsibly than the Arsenic Life paper. Although both research teams made use of press releases to gain a broad audience for their findings (note this down in your lab notebook as “do not do” if you are a researcher), the OPERA findings were at the pre-publication stage and disclosed as an invitation to greater scrutiny of their instrumentation, while the arsenic life strategy was much less reserved. From the OPERA press release:

The OPERA measurement is at odds with well-established laws of nature, though science frequently progresses by overthrowing the established paradigms. For this reason, many searches have been made for deviations from Einstein’s theory of relativity, so far not finding any such evidence. The strong constraints arising from these observations makes an interpretation of the OPERA measurement in terms of modification of Einstein’s theory unlikely, and give further strong reason to seek new independent measurements.

Notice the description of the search for exceptions to Einstein’s relativity as “. . . so far not finding any such evidence . . .”, despite the fact that the data they were reporting would constitute exactly such evidence, if anomalous instrumentation could be ruled out. This was a plea for help, not a claim of triumph.

By contrast, the press seminar associated with the release of Felisa Wolfe-Simon et al.’s “A bacterium that can grow by using arsenic instead of phosphorus” issued no such caveats with its claims. Likewise, it was readily apparent from the methods section of the paper that the Arsenic Life team made no strong effort to refute their own data (the principal aim of experimentation), and the review process at Science should probably have been more rigorous than standard practice. It is perhaps repeated too often without consideration, but I’ll mention the late, great Carl Sagan’s assertion that “extraordinary claims require extraordinary evidence.” The OPERA team kept this in mind, while the Arsenic Life paper showed a strong preference for sweeping under the carpet any due diligence in considering alternative explanations. Ultimately, the OPERA results were explained as an instrumentation error, and the Arsenic Life discovery has been refuted in several independent follow-up experiments (e.g. [1], [2]).

Is propellant-less propulsion on par with Arsenic Life or FTL neutrinos in terms of communicating findings? In this case I would lean toward the latter: more of a search for instrumentation error than a claim of the discovery of Totally New Physics. The title of the tech report “Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum” denotes the minimum requisite dose of skepticism.

Background reading below, but by far the best take on the subject is xkcd number 1404. The alt-text: “I don’t understand the things you do, and you may therefore represent an interaction with the quantum vacuum virtual plasma.”

23/08/2014 several typos corrected
[UPDATE] Full version of tech report available via the comments.