Double threaded lens cap!

Check it out at http://shpws.me/z698!

Philaephilia?

Philaephilia n. Temporary obsession with logistically important and risky stage of scientific endeavour and cometary rendezvous.

Don’t worry, the condition is entirely transient.

Rivalling the “7 minutes of terror” of NASA’s Curiosity rover entering the Martian atmosphere, Philae’s descent on Wednesday onto comet 67P/Churyumov-Gerasimenko, part of the European Space Agency’s Rosetta mission, had the world excited about space again.

Comets don’t have the classic appeal of planets like Mars. The high visibility of Mars missions and moon shots has roots in visions of a Mars covered in seasonal vegetation and full of sexy humans dressed in scraps of leather, and little else. But comets may be much better targets in terms of scientific benefits. Comets are thought to have added water to early Earth, after the young sun had blasted the substance out to the far reaches of the solar system, beyond the realm of the rocky planets. Of course, comets are also of interest for pure novelty: until Philae, humans had never gently put a machine down on a comet. Now the feat has been accomplished three times, albeit a bit awkwardly, with all science instruments surviving two slow bounces and an unplanned landing site. It is unfortunate that Philae is limited to only 1.5 hours of sunlight per 12-hour comet day, but there is some possibility that a last-minute attitude adjustment arranged the solar panels a bit more favourably.

So if Rosetta’s Philae lander bounced twice rather than grappling the surface as intended, and came to rest in a wayward orientation where its solar panels receive only 12.5% of nominal sun exposure (those 1.5 hours of each 12-hour day), how is the mission considered a success?

Most likely, the full significance of the data relayed from Philae via Rosetta will take several months of analysis to uncover. Perhaps some of the experiments will be wholly inconclusive and observational, neither confirming nor denying hypotheses about the characteristic structure of comets. For example, it seems unlikely that the MUPUS instrument (a hammer-driven penetrator, the closest thing to a cosmic drill on board) managed to penetrate a meaningful distance into the comet, so we probably won’t gain much insight into anything below the top centimetre or so. In contrast, CONSERT may yield unprecedented observations about the interior makeup of a comet.

In science, failures and negative findings are often more conclusive, and arguably preferable, than so-called positive results, despite the selective pressure for the latter in science careers and the lay press. An exception disproves the rule, but a finding in agreement with theory merely “fails to negate” said theory. For example, we now know better than to rely on nitrocellulose as a propellant in vacuum. Lesson learned on that front.

In addition to a something-divided-by-nothing-fold increase in knowledge about the specific scenario of attempting a soft landing on a comet, I’d suggest we now know a bit more about the value of autonomy in expeditions where the signal delay between mission control and operations precludes real-time feedback. Perhaps if Philae had been optimised for adaptability, it could have detected that the touchdown and grapple didn’t go through, maintained its orientation to the comet surface, and given Rosetta and scientists at home a better idea of its (final) resting place. Space science is necessarily cautious, but adaptive neural networks and other alternative avenues may prove useful in future missions.

I’ll eagerly await the aftermath, when the experimental and telemetry data have been further analysed. The kind of space mission where a landing sequence can omit a major step and still deliver operational success for every scientific instrument on board is the kind of mission that space agencies should focus on. The Rosetta/Philae mission combined key elements of novelty (first soft landing on and sustained orbiting of a comet), low cost (comparable to a few space shuttle missions), and robustness (the harpoons didn’t fire, the lander bounced and got lost, and science still occurred). Perhaps we’ll see continued ventures from international space agencies into novel, science-driven expeditions. Remember, the first scientist on the moon was on the (so far) final manned mission to Luna. Missions in the style of Rosetta may be more effective and valuable on all three of the above points, and are definitely more fundamental in terms of science achieved, than continuous returns to Mars and pushes for manned missions. In a perfect world where space agencies operate in a non-zero-sum funding situation alongside all the other major challenges faced by human society, we would pursue them all. But realistically, Philae has shown that alternative missions not only offer more for us to learn in terms of science and engineering, but can also enrapture the population in a transcendent endeavour. Don’t stop following the clever madness of humans pursuing their fundamental nature of exploring the universe they live in.

The advantages of parametric design

I work primarily in OpenSCAD when making designs for 3D printing (and 2D designs for lasercutting). This means that instead of a WYSIWYG interface driven primarily by the mouse, my designs are all scripted in a programming language that looks a lot like C. This might seem a bit more difficult at first (and it is certainly less than ideal for some situations), but it makes for a pretty simple way to generate repetitive structural elements with basic flow control, i.e. for loops. Even more importantly, it means that I can substantially change a design by modifying the variable values passed to a function (called a module in OpenSCAD).

For the sake of an example, take Lieberkühn reflectors for macrophotography. Lieberkühn reflectors are a classic illumination technique that has mostly fallen out of style in favour of more modern illumination such as LED or fibre-based lighting, but remains quite elegant and offers a few unique advantages. I have been working with these in conjunction with a few different lenses, and mostly with the help of a macro bellows. The bellows makes for variable working distances as well as magnifications, so the focus of one Lieberkühn will be most effective only within a narrow range of bellows lengths. Parametric designs such as the ones I create in OpenSCAD allow me to change attributes such as the nominal working distance without starting each design from scratch. For example (a sketch of such a module follows the list):

LRWD35: 35mm Lieberkühn focus
LRWD30: 30mm Lieberkühn focus
LRWD25: 25mm Lieberkühn focus
LRWD20: 20mm Lieberkühn focus
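To give a flavour of what that looks like in practice, here is a minimal sketch of a parametric module, assuming a simple parabolic shell; the module name, dimensions, and parameter values are placeholders for illustration rather than the actual source behind the reflectors listed above:

// Hypothetical parametric Lieberkuehn reflector (illustrative only).
// The reflective surface is a paraboloid z = r^2/(4*f) whose focus sits
// working_distance mm above the vertex, revolved into a thin shell.
wall        = 1.5;   // shell thickness (mm)
lens_hole_r = 26;    // radius of the central opening the lens looks through (mm)
outer_r     = 60;    // outer radius of the reflector dish (mm)

module lieberkuhn(working_distance = 35) {
    f    = working_distance;                 // parabola focal length (mm)
    step = (outer_r - lens_hole_r) / 30;     // radial sampling of the profile
    // 2D cross-section: reflective parabola going outward, back surface
    // (offset by the wall thickness) coming back inward
    inner = [for (r = [lens_hole_r : step : outer_r]) [r, r * r / (4 * f)]];
    back  = [for (r = [outer_r : -step : lens_hole_r]) [r, r * r / (4 * f) + wall]];
    rotate_extrude($fn = 120)
        polygon(concat(inner, back));
}

// Each LRWD variant above is just a different argument to the same module:
lieberkuhn(working_distance = 35);   // LRWD35; change the argument for 30/25/20

A real design would also carry the lens thread and mounting details, but the point stands: regenerating the whole family of reflectors is a one-argument change rather than a new model.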

This approach has proven highly useful for me, both for creating highly customisable designs and for iterating to get the fit just right. I’ll post results of my latest exploration of Lieberkühn reflectors soon after I receive the latest realisation in Shapeways bronzed steel.

Have we really lost 52% of the world’s animals?

The methods used by the LPI should not be accepted without reservation

Turning a critical eye on the 2014 Living Planet Report.

WWF’s Living Planet Report (LPR) 2014 has been making headlines because of its alarming claim that population sizes of mammals, birds, reptiles, amphibians and fish have dropped by half since 1970. The report reached this stark (and widely shared) conclusion via the Living Planet Index (LPI), a “measure of the state of the world’s biological diversity based on population trends of vertebrate species from terrestrial, freshwater and marine habitats” developed by scientists at WWF and the Zoological Society of London (ZSL). The LPI was adopted by the Convention on Biological Diversity (CBD) as a progress indicator for its 2020 goal to “take effective and urgent action to halt the loss of biodiversity”, which sadly (but unsurprisingly) appears to be failing.

In the previous edition of the LPR published two years ago, the drop in vertebrate numbers was estimated to be 30%. Now the scientists behind the LPI claim to have improved the method, resulting in a much greater decrease (52%) than previously reported. But the methodology is still highly controversial.

The team estimated trends in 10,380 populations of 3,038 mammal, bird, reptile, amphibian and fish species using 2,337 data sources, including published scientific literature, online databases, and grey literature. The data used in constructing the index are time series of population size, density, abundance or a “proxy of abundance”, e.g. bird nest density when no direct counts were available.

The collection and analysis of these data represent an enormous amount of work, and the team responsible deserves praise for undertaking this huge project and for creating an urgent call to action for wildlife conservation. However, we need to bear in mind that this dramatic “halving” of the world’s vertebrates is a grotesque oversimplification of biodiversity loss. The diversity of data sources and types used, the variability in data quality, and the uncertainty behind many of the population trend estimates mean that the LPI is probably not very reliable.

Additionally, the 3,038 species included in the analyses represent only 4.8% of the world’s 62,839 described vertebrate species. (The report entirely omits invertebrates, which are often keystone species and vastly outnumber all vertebrate animals.) Following criticism of the methodology of previous LPIs, this year the LPI team used the estimated number of species in different taxonomic groups and biogeographic areas to apply weightings to the data. This means that the population trend of a particular taxonomic group becomes more important if the group comprises a large number of species, whereas the population trend of a species-poor taxon is allocated considerably less weight.

To illustrate this, let us consider fishes, which in the LPI analysis represent the largest proportion of vertebrate species in almost all biogeographic areas and therefore carry the most weight. My guess is that the fish species whose population trends are sufficiently documented to be included in the analysis are most often in serious decline, because well-studied species are usually those that are either overharvested or frequent victims of bycatch. Therefore, the negative fish trend contributed more to the final 52% figure than the decline of any other taxonomic group. Ironically, by trying to decrease error from taxonomic bias in the available data, this method allows well-known species to drive the overall trend and does not deal with the problem of underrepresentation of less-studied species. Many of these less visible species, outside of human interest as food or pests, contribute substantially to overall biodiversity and ecosystem function.
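For readers who want the weighting made explicit, the published LPI method boils down, in my own rough notation (a paraphrase for illustration, not the report’s own symbols), to a chained, weighted geometric mean of population trends:

d_{p,t} = \log_{10}\left( N_{p,t} / N_{p,t-1} \right)

\bar{d}_t = \sum_g w_g \, \bar{d}_{g,t}, \qquad w_g \propto \text{estimated species richness of group } g, \qquad \sum_g w_g = 1

\mathrm{LPI}_t = \mathrm{LPI}_{t-1} \cdot 10^{\bar{d}_t}, \qquad \mathrm{LPI}_{1970} = 1

where \bar{d}_{g,t} is the average of the annual log changes d_{p,t} over populations (and then species) within taxonomic group g. A headline 52% decline simply means the final index value sits near 0.48 relative to the 1970 baseline, and because the weights w_g track species richness, the species-rich (and mostly declining) fish groups dominate \bar{d}_t, which is exactly the behaviour described above.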

Should we believe the shocking headlines? Have we really killed “half of the world’s animals”? Probably not. Conservationists hope that this type of dramatic statement will inspire action but the severity of the claim risks desensitising the public, achieving the opposite of its intended effect. Developing a clear picture of the degree of the threats humans pose to biodiversity is difficult, but imperfect knowledge is no excuse for negligence. We know for certain that we are driving species to extinction at an alarming rate and that this will have serious implications for the environment, economies, and human health. Is this knowledge really not sufficient to motivate urgent and meaningful conservation action?

Olivia Nater is a conservationist and biologist who is particularly fond of bees. Twitter @beeologist

How to win the Olympus Bioscapes photomicrography contest


All you need to win a $5,000 microscope is a $250,000 microscope

It is almost time to dust off your cover-image-quality photomicrographs and enter the Olympus Bioscapes microscopy contest. Judging by the techniques used by contest winners since the contest’s inauguration in 2004, the best way to better your chances is to use a confocal microscope. A side-effect of inventing a technique that wins a Nobel Prize is that eventually it becomes run-of-the-mill, and “conventional” widefield fluorescence also makes a good showing. Biophotonics purists will find plenty to like as well: transmitted-light microscopy is well represented in a smattering of techniques including differential interference contrast, Zernike phase contrast, polarised light, Rheinberg illumination and Jamin-Lebedeff interference.


Confocal may be at the top of the heap at the moment, but transmitted-light techniques continue to make strong appearances in stunning images among the top-ten places in Olympus Bioscapes.

In a promising development, computational imaging techniques also find success in the contest. The broad category of “computational optics” includes techniques such as structured illumination, in which the patterns in several images (rather uninspiring on their own) are combined to give a computed image with resolution modestly better, up to about a factor of two, than the physically imposed diffraction limit. Also in this category is light sheet microscopy, which creates nice images on its own (and has since the Ultramikroskop [pdf] of the early 1900s), but is even better suited to combining many images to form a volume image. In my opinion, treating light as computable fields, equally amenable to processing in physical optics or electronics, is the enabling philosophy for the next deluge of discoveries to be made with biomicroscopy.
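To unpack how that works, here is the standard one-dimensional sketch of linear structured illumination (generic symbols, not anything taken from the contest entries). Illuminating the sample S(x) with a sinusoidal pattern of spatial frequency k_0 gives a detected image

D(x) = \left[ S(x)\,\bigl(1 + m\cos(k_0 x + \varphi)\bigr) \right] \otimes \mathrm{PSF}(x),

so in Fourier space \tilde{D}(k) contains the usual \tilde{S}(k) plus shifted copies \tilde{S}(k \pm k_0), all clipped by the diffraction-limited transfer function. Recording the same field at several pattern phases \varphi lets the computer separate the copies and slide them back to their true positions, extending coverage to |k| \le k_{\mathrm{Abbe}} + k_0 — hence resolution up to roughly twice the conventional limit when the illumination pattern is itself at the diffraction limit. The raw frames really are uninspiring; the image only exists after the computation.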

Compare the winningest techniques from the Olympus contest with those of the Nikon Small World contest below. Interestingly enough, confocal microscopy falls behind the simpler widefield fluorescence in the Nikon contest, and both have been bested throughout the history of the competition by polarised light microscopy. Some of the differences between Olympus and Nikon contest winners may be due to the timing of technological breakthroughs. Bioscapes began in 2004, while Small World has been running since the 1970s. The vogue techniques and the state of the art have certainly evolved over the last four decades.

Nikon Small World Winners

Seeing at Billionths and Billionths

This was my (unsuccessful) entry into last year’s Wellcome Trust Science Writing Prize. It is very similar to the post I published on the 24th of June, taking a slightly different bent on the same theme.

The Greek word skopein underlies the etymology of a set of instruments that laid the foundations for our modern understanding of nature: microscopes. References to the word are recognisable across language barriers thanks to the pervasive influence of the ancient languages of scholarship, and common usage gives us hints as to the meaning. We scope out a new situation, implying that we not only give a cursory glance but also take some measure or judgement.

Drops of glass held in brass enabled Robert Hooke and Antonie van Leeuwenhoek to make observations that would eventually give rise to germ theory. Light microscopy unveiled our friends and foes among bacteria, replacing humours and miasmas as the primary effectors driving human health and disease. The concept of miasmatic disease, the idea that illness is caused by tainted air, is now so far-fetched that the term has been almost entirely lost to time. The bird-like masks worn by plague doctors were stuffed with potpourri: the thinking of the time was that fragrance alone could protect against the miasma of the Black Death. The idea seems silly to us now, thanks to the fruits of our inquiry. The cells described by Hooke and the “animalcules” seen by Leeuwenhoek marked a transition from a world operated by invisible forces to one in which the mechanisms of nature were vulnerable to human scrutiny. In short, science was born in the backs of our eyes.

The ability of an observer using an optical instrument to differentiate between two objects has, until recently, been limited by the tendency of waves to bend around obstacles and spread through apertures, a phenomenon known as diffraction. The limiting effects of diffraction were formalised by the German physicist Ernst Abbe in 1873. The same effect can be seen in water ripples bending around a pier.

If the precision of optical components is tight enough to eliminate aberrations, and seeing conditions are good enough, imaging is “diffraction-limited.” With the advent of adaptive optics, dynamic mirrors and the like allow observers to remove aberrations arising in the sample as well as in the optics. Originally developed to spy on dim satellites through a turbulent atmosphere, adaptive optics have recently been applied to microscopy to counteract the blurring effect of looking through tissue. If astronomy is like looking out from underwater, microscopy is like peering at the leaves at the bottom of a hot cuppa, complete with milk.

Even with the precise control afforded by adaptive optics, the best possible resolution is still governed by the diffraction limit, roughly half the wavelength of light. Before Leeuwenhoek, the microbial world was invisible; through most of the 20th century, the molecular machinery underpinning the cellular processes of life remained just as invisible, smeared by the diffraction limit into an irresolvable blur.
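For concreteness, the textbook form of Abbe’s limit (standard optics, stated here as a reminder rather than drawn from the original essay) ties the smallest resolvable separation d to the wavelength \lambda and the numerical aperture of the objective:

d = \frac{\lambda}{2\,\mathrm{NA}}, \qquad \mathrm{NA} = n \sin\theta.

With green light (\lambda \approx 500 nm) and a high-end oil-immersion objective (\mathrm{NA} \approx 1.4), d comes out near 180 nm: small enough to resolve a bacterium, but hopelessly coarse for the tens-of-nanometre machinery of the cell.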

A human cell is typically on the order of ten microns in diameter. Its proteins, membranes, and DNA are organised at scales a hundred to a thousand times smaller, in the tens and hundreds of nanometres. In a conventional microscope, information at this scale is not retrievable thanks to diffraction, yet it underlies all of life. Many of the mechanisms of disease operate at this level as well, and knowledge about how and why cells make mistakes has resounding implications for cancer and ageing. In the past few decades physicists and microscopists have developed a number of techniques to go beyond the diffraction limit and measure the nanometric technology that makes life.

A number of techniques have been developed to surpass the diffraction barrier. They vary widely, but most rely on some form of engineered illumination, engineered fluorescent proteins, or both. The thing they have in common is computation: the computer has become as important an optical component as a proper lens.

New instrumentation enables new measurements at the behest of human inquiry. Questions about biology at increasingly small spatial scales and in increasingly challenging imaging contexts generate the need for higher-precision techniques, in turn opening a floodgate of previously unobtainable data. New data lead to new questions, and the cycle continues until it abuts the fundamental laws of physical nature. Before bacteria were discovered, it was impossible to imagine their role in illness and equally impossible to test it. Once the role was known, it became a simple intuitive leap for Alexander Fleming to suppose that the growth inhibition of bacteria by fungi he saw in the lab might be useful as medicine. With the ability to see at the level of tens of nanometres, another world of invisible forces has been opened to human consideration and innovation. Scientists have already leaped one barrier at the diffraction limit. With no fundamental limit to human curiosity, let us consider current super-resolution techniques as the first of many triumphs of detection beyond the limits of what is deemed possible.

Rubbish in, Garbage Out?

Extraordinary claims require extraordinary press releases?

You have probably read a headline in the past few weeks stating that NASA has verified that an infamous, seemingly reactionless propulsion drive does in fact produce force. You probably have not read the technical report that spurred the media frenzy (relative to the amount of press coverage normally allocated to space propulsion research, anyway), relying instead on the media reports and their contracted expert opinions. The twist is that almost no one else has read it either, excepting perhaps the participants of the conference at which it was presented, and that includes me and likely the authors of most other material you will find commenting on it. The reason is that the associated entry in the NASA Technical Reports Server consists only of an abstract.

The current upswing of interest and speculation surrounding this strange drive is eerily reminiscent of other recent \begin{sarcasm}groundbreaking discoveries\end{sarcasm}: faster-than-light neutrinos measured by the OPERA experiment, and the Arsenic Life bacterium from Mono Lake, California. Both were later refuted, some important people at OPERA ended up resigning, and the Arsenic Life paper continues to boost the citation metrics of its authors and publisher as Science Magazine refuses to retract it (its citations according to Google Scholar currently number more than 300).

I would venture that the OPERA findings were disclosed more responsibly than the Arsenic Life results. Although both research teams made use of press releases to gain a broad audience for their findings (note this down in your lab notebook as “do not do” if you are a researcher), the OPERA findings were at the pre-publication stage and disclosed as an invitation to greater scrutiny of the team’s instrumentation, while the Arsenic Life strategy was much less reserved. From the OPERA press release:

The OPERA measurement is at odds with well-established laws of nature, though science frequently progresses by overthrowing the established paradigms. For this reason, many searches have been made for deviations from Einstein’s theory of relativity, so far not finding any such evidence. The strong constraints arising from these observations makes an interpretation of the OPERA measurement in terms of modification of Einstein’s theory unlikely, and give further strong reason to seek new independent measurements.

Notice the description of the search for exceptions to Einstein’s relativity as “. . . so far not finding any such evidence . . .”, despite the data being reported doing exactly that, if anomalous instrumentation could be ruled out. This was a plea for help, not a claim of triumph.

By contrast, the press seminar accompanying the release of Felisa Wolfe-Simon et al.’s “A bacterium that can grow by using arsenic instead of phosphorus” issued no such caveats with its claims. Likewise, it was readily apparent from the methods section of the paper that the Arsenic Life team made no strong efforts to refute their own data (the principal aim of experimentation), and the review process at Science should probably have been more rigorous than standard practice. It is perhaps repeated too often without consideration, but I’ll mention the late, great Carl Sagan’s assertion that “extraordinary claims require extraordinary evidence.” The OPERA team kept this in mind, while the Arsenic Life paper showed a strong preference for sweeping alternative explanations under the carpet rather than giving them due diligence. Ultimately, the OPERA results were explained as an instrumentation error, and the Arsenic Life discovery has been refuted in several independent follow-up experiments (e.g. [1][2]).

Is the communication of the propellant-less propulsion findings on par with Arsenic Life or FTL neutrinos? In this case I would lean toward the latter: more of a search for instrumentation error than a claim of the discovery of Totally New Physics. The title of the tech report, “Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum”, carries the minimum requisite dose of skepticism.

Background reading below, but by far the best take on the subject is xkcd number 1404. The alt-text: “I don’t understand the things you do, and you may therefore represent an interaction with the quantum vacuum virtual plasma.”

23/08/2014 several typos corrected
[UPDATE: Full version of tech report: http://rghost.net/57230791] via comments from http://ow.ly/ADJqb.
http://www.wired.co.uk/news/archive/2014-07/31/nasa-validates-impossible-space-drive
http://www.wired.co.uk/news/archive/2014-08/07/10-qs-about-nasa-impossible-drive
http://www.wired.com/2014/08/why-nasas-physics-defying-space-engine-is-probably-bogus/?mbid=social_twitter
http://en.wikipedia.org/wiki/Quantum_vacuum_plasma_thruster
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20140006052.pdf
http://www.wired.co.uk/news/archive/2013-02/06/emdrive-and-cold-fusion
http://www.aiaa.org/EventDetail.aspx?id=18582