Have we really lost 52% of the world’s animals?

The methods used by the LPI should not be accepted without reservation

Turning a critical eye on the 2014 Living Planet Report.

WWF’s Living Planet Report (LPR) 2014 has been making headlines because of its alarming claim that population sizes of mammals, birds, reptiles, amphibians and fish have dropped by half since 1970. The report reached this stark (and widely shared) conclusion via the Living Planet Index (LPI), a “measure of the state of the world’s biological diversity based on population trends of vertebrate species from terrestrial, freshwater and marine habitats” developed by scientists at WWF and the Zoological Society of London (ZSL). The LPI was adopted by the Convention on Biological Diversity (CBD) as a progress indicator for its 2020 goal to “take effective and urgent action to halt the loss of biodiversity”, which sadly (but unsurprisingly) appears to be failing.

In the previous edition of the LPR, published two years ago, the drop in vertebrate numbers was estimated at 30%. Now the scientists behind the LPI claim to have improved the method, resulting in a much greater decrease (52%) than previously reported. But the methodology remains highly controversial.

The team estimated trends in 10,380 populations of 3,038 mammal, bird, reptile, amphibian and fish species using 2,337 data sources, including published scientific literature, online databases, and grey literature. The data used in constructing the index are time series of population size, density, abundance or a “proxy of abundance”, e.g. bird nest density when no bird counts were available.

The collection and analysis of these data represent an enormous amount of work, and the team responsible deserves praise for undertaking this huge project and for creating an urgent call to action for wildlife conservation. However, we need to bear in mind that this dramatic “halving” of the world’s vertebrates is a grotesque oversimplification of biodiversity loss. The diversity of data sources and types used, the variability in data quality, and the uncertainty behind many of the population trend estimates mean that the LPI is probably not very reliable.

Additionally, the 3,038 species included in the analyses represent only 4.8% of the world’s 62,839 described vertebrate species. (The report entirely omits invertebrates, which are often cornerstone species and vastly outnumber all vertebrate animals.) Following criticism of the methodology of previous LPIs, this year the LPI team used the estimated number of species in different taxonomic groups and biogeographic areas to apply weightings to the data. This means that the population trend of a particular taxonomic group becomes more important if the group comprises a large number of species, whereas the population trend of a species-poor taxon is allocated considerably less weight. To illustrate this, consider fishes, which in the LPI analysis represent the largest proportion of vertebrate species in almost all biogeographic areas and therefore carry the most weight. My guess is that the fish species whose population trends are sufficiently documented to be included in the analysis are most often in serious decline, because well-studied species are usually those that are either overharvested or frequent victims of bycatch. If so, the negative fish trend contributed more to the final 52% figure than the decline of any other taxonomic group. Ironically, by trying to reduce error from taxonomic bias in the available data, this method allows well-known species to drive the overall trend and does nothing about the underrepresentation of less-studied species. Many of these less visible species, outside human interest as food or pests, contribute substantially to overall biodiversity and ecosystem function.
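
For readers curious about the mechanics, an index of this kind is built by chaining averaged interannual rates of change rather than summing raw headcounts. Below is a minimal sketch of how such a geometric-mean index behaves, with made-up data and illustrative weights; it is not the LPI team’s actual code, but it shows how upweighting one well-sampled, declining group drags the whole index down.

```python
import numpy as np

def lpi_style_index(pop_timeseries, weights=None):
    """Chain a geometric-mean abundance index from population time series.

    pop_timeseries: array of shape (n_populations, n_years), abundances > 0.
    weights: optional per-population weights (e.g. by taxonomic group).
    Returns an index of length n_years, starting at 1.0.
    """
    pops = np.asarray(pop_timeseries, dtype=float)
    # Interannual rates of change on a log10 scale, per population per year step.
    rates = np.diff(np.log10(pops), axis=1)
    if weights is None:
        weights = np.ones(len(pops))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    # Weighted mean rate across populations for each year step.
    mean_rates = weights @ rates
    # Chain the index: start at 1.0 and apply each year's mean rate.
    return np.concatenate(([1.0], 10 ** np.cumsum(mean_rates)))

# Two stable populations and one well-studied, declining one:
stable = [[100, 100, 100, 100]] * 2
declining = [[100, 70, 50, 35]]
data = np.array(stable + declining)

print(lpi_style_index(data))                      # equal weights
print(lpi_style_index(data, weights=[1, 1, 5]))   # upweight the declining taxon
```

With equal weights the toy index ends near 0.70; giving the declining group five times the weight drags it to roughly 0.47, which is exactly the concern about well-documented, declining taxa dominating the headline figure.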

Should we believe the shocking headlines? Have we really killed “half of the world’s animals”? Probably not. Conservationists hope that this type of dramatic statement will inspire action, but the severity of the claim risks desensitising the public, achieving the opposite of its intended effect. Developing a clear picture of the degree of the threats humans pose to biodiversity is difficult, but imperfect knowledge is no excuse for negligence. We know for certain that we are driving species to extinction at an alarming rate and that this will have serious implications for the environment, economies, and human health. Is this knowledge really not sufficient to motivate urgent and meaningful conservation action?

Olivia Nater is a conservationist and biologist who is particularly fond of bees. Twitter @beeologist

How to win the Olympus Bioscapes photomicrography contest


All you need to win a $5,000 microscope is a $250,000 microscope

It is almost time to dust off your cover-image-quality photomicrographs and enter the Olympus Bioscapes microscopy contest. Judging by the techniques used by contest winners since the contest’s inauguration in 2004, the best way to improve your chances is to use a confocal microscope. A side-effect of inventing a technique that wins a Nobel Prize is that it eventually becomes run-of-the-mill, and “conventional” widefield fluorescence also makes a good showing. Biophotonics purists will find plenty to like as well: transmitted light microscopy is well represented in a smattering of techniques including differential interference contrast, Zernike phase, polarised light, Rheinberg illumination and Jamin-Lebedeff interference.


Confocal may be at the top of the heap at the moment, but transmitted light techniques continue to make strong appearances in stunning images among the top-ten places in Olympus Bioscapes.

In a promising development, computational imaging techniques are also finding success in the contest. The broad category of “computational optics” includes techniques such as structured illumination, in which the patterns in several images (rather uninspiring on their own) are combined to give a computed image with resolution slightly better than the physically imposed diffraction limit. Also in this category is light sheet microscopy, which creates nice images on its own (and has since the Ultramikroskop [pdf] of the early 1900s), but is even better suited to combining many images to form a volume image. In my opinion, treating light as computable fields, equally amenable to processing in physical optics or electronics, is the enabling philosophy for the next deluge of discoveries to be made with biomicroscopy.
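
To see why combining patterned images works, consider a one-dimensional toy model (a Python sketch with NumPy; all the frequencies are invented for illustration): multiplying sample structure that lies beyond the optical passband by a known illumination pattern produces a low-frequency moiré beat that the optics can transmit, and which computation can later reassign to its true frequency.

```python
import numpy as np

# 1D toy: an optical system that only passes frequencies below f_cutoff.
x = np.linspace(0, 1, 2048, endpoint=False)
f_cutoff = 100.0          # passband edge (cycles per unit), set by diffraction
f_sample = 130.0          # sample detail too fine to image directly
f_illum = 80.0            # structured illumination within the passband

sample = 1 + np.cos(2 * np.pi * f_sample * x)
illum = 1 + np.cos(2 * np.pi * f_illum * x)

# Multiplication mixes the two: the product contains |f_sample - f_illum| = 50,
# a moire beat inside the passband, carrying the unresolvable 130-cycle detail.
moire = sample * illum

spectrum = np.abs(np.fft.rfft(moire))
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])
passed = freqs[(spectrum > 1.0) & (freqs < f_cutoff)]
print(passed)  # includes 50.0 and 80.0: sub-cutoff peaks encoding fine detail
```

Real structured illumination microscopy repeats this with several pattern phases and orientations, then solves for the displaced frequency components; the toy only shows the mixing step that makes the rest possible.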

Compare the winningest techniques from the Olympus contest with those of the Nikon Small World contest below. Interestingly enough, confocal microscopy falls behind the simpler widefield fluorescence in the Nikon contest, and both have been bested throughout the history of the competition by polarised light microscopy. Some of the differences between Olympus and Nikon contest winners may be due to the timing of technological breakthroughs: Bioscapes began in 2004, while Small World has been in operation since the late seventies. The vogue techniques and the state of the art have certainly evolved over the last four decades.

Nikon Small World Winners

Seeing at Billionths and Billionths

This was my (unsuccessful) entry into last year’s Wellcome Trust Science Writing Prize. It is very similar to the post I published on the 24th of June, taking a slightly different slant on the same theme.

The Greek word skopein underlies the etymology of a set of instruments that laid the foundations for our modern understanding of nature: microscopes. References to the word are recognisable across language barriers thanks to the pervasive influence of the ancient languages of scholarship, and common usage gives us hints as to the meaning. We scope out a new situation, implying that we not only give a cursory glance but also take some measure or judgement.

Drops of glass held in brass enabled Robert Hooke and Antonie van Leeuwenhoek to make observations that gave rise to germ theory. Light microscopy unveiled our friends and foes among bacteria, replacing humours and miasmas as the primary effectors driving human health and disease. The concept of miasmatic disease, a term that supposes disease is caused by tainted air, is now so far-fetched that the term has been almost entirely lost to time. The bird-like masks worn by plague doctors were stuffed with potpourri: the thinking of the time was that fragrance alone could protect against the miasma of the Black Death. The idea seems silly to us now, thanks to the fruits of our inquiry. The cells described by Hooke and the “animalcules” seen by Leeuwenhoek marked a transition from a world operated by invisible forces to one in which the mechanisms of nature were vulnerable to human scrutiny. In short, science was born in the backs of our eyes.

The ability of an observer using an optical instrument to differentiate between two objects has, until recently, been limited by the tendency of waves to bend at boundaries, a phenomenon known as diffraction; the same effect can be seen in water ripples bending around a pier. The limiting effects of diffraction were formalised by the German physicist Ernst Abbe in 1873.
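
Abbe’s relation makes the barrier concrete: the smallest resolvable separation is roughly the wavelength divided by twice the numerical aperture of the optics. A quick worked sketch, with illustrative values:

```python
# Abbe diffraction limit: smallest resolvable separation d = wavelength / (2 * NA).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2 * numerical_aperture)

# Illustrative values: green light through a high-end oil-immersion objective.
print(abbe_limit_nm(500, 1.4))   # ~179 nm: anything finer blurs together
```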

If the precision of optical components is tight enough to eliminate aberrations, and seeing conditions are good enough, imaging is “diffraction-limited.” With the advent of adaptive optics, dynamic mirrors and the like let observers remove aberrations from the sample as well as the optics. Originally developed to spy on dim satellites through a turbulent atmosphere, adaptive optics have recently been applied to microscopy to counteract the blurring effect of looking through tissue. If astronomy is like looking out from underwater, microscopy is like peering at the leaves at the bottom of a hot cuppa, complete with milk.

Even with the precise control afforded by adaptive optics, the best possible resolution is still governed by the diffraction limit, about half the wavelength of light. Before Leeuwenhoek’s time, the microbial world was invisible. Likewise, through the 20th century the molecular machinery underpinning the cellular processes of life remained invisible, smeared by the diffraction limit into an irresolvable blur.

A human cell is typically on the order of ten microns in diameter. Its proteins, membranes, and DNA are organised at a scale about one-thousandth as large, in the tens and hundreds of nanometres. In a conventional microscope, information at this scale is not retrievable thanks to diffraction, but it underlies all of life. Many of the mechanisms of disease operate at this level as well, and knowledge about how and why cells make mistakes has resounding implications for cancer and aging. In the past few decades physicists and microscopists have developed a number of techniques to go beyond the diffraction limit to measure the nanometric technology that makes life.

A number of techniques have been developed to surpass the diffraction barrier. They vary widely, relying on some combination of engineered illumination and engineered fluorescent proteins. The thing they have in common is computation: the computer has become as important an optical component as a proper lens.

New instrumentation enables new measurements at the behest of human inquiry. Questions about biology at increasingly small spatial scales and in increasingly challenging imaging contexts generate the need for higher-precision techniques, in turn opening a floodgate of previously unobtainable data. New data lead to new questions, and the cycle continues until it abuts the fundamental laws of physical nature. Before bacteria were discovered, it was impossible to imagine their role in illness and equally impossible to test it. Once the role was known, it became a simple intuitive leap for Alexander Fleming to suppose that the growth inhibition of bacteria by fungi he saw in the lab might be useful as medicine. With the ability to see at the level of tens of nanometres, another world of invisible forces has been opened to human consideration and innovation. Scientists have already leaped one barrier at the diffraction limit. With no fundamental limit to human curiosity, let us consider current super-resolution techniques as the first of many triumphs of detection past the limits of what is deemed possible.

Rubbish in, Garbage Out?

Extraordinary claims require extraordinary press releases?

You have probably read a headline in the past few weeks stating that NASA has verified that an infamous, seemingly reactionless propulsion drive does in fact produce force. You also might not have read the technical report that spurred the media frenzy (relative to the amount of press coverage normally allocated to space propulsion research, anyway), instead relying on the media reports and the expert opinions they solicited. The twist is that, it seems, no one else (excepting perhaps the participants of the conference where it was presented) has read it either, and this includes myself and likely the authors of almost any other material you find commenting on it. The reason is that the associated entry in the NASA Technical Reports Server consists only of an abstract.

The current upswing of interest and associated speculation on the matter of this strange drive is eerily reminiscent of other recent \begin{sarcasm}groundbreaking discoveries\end{sarcasm}: FTL neutrinos measured by the OPERA experiment and the Arsenic Life bacterium from Mono Lake, California. Both were later refuted; some important people at OPERA ended up resigning, and the Arsenic Life paper continues to boost the impact factors of its authors and publisher as Science Magazine refuses to retract it (citations currently number more than 300 according to Google Scholar).

I would venture that the OPERA findings were disclosed more responsibly than the Arsenic Life paper. Although both research teams used press releases to gain a broad audience for their findings (note this down in your lab notebook as “do not do” if you are a researcher), the OPERA findings were at the pre-publication stage and disclosed as an invitation to greater scrutiny of their instrumentation, while the Arsenic Life strategy was much less reserved. From the OPERA press release:

The OPERA measurement is at odds with well-established laws of nature, though science frequently progresses by overthrowing the established paradigms. For this reason, many searches have been made for deviations from Einstein’s theory of relativity, so far not finding any such evidence. The strong constraints arising from these observations makes an interpretation of the OPERA measurement in terms of modification of Einstein’s theory unlikely, and give further strong reason to seek new independent measurements.

Notice that the search for deviations from Einstein’s relativity is described as “. . . so far not finding any such evidence . . .” despite the data they were reporting doing exactly that, if anomalous instrumentation could be ruled out. This was a plea for help, not a claim of triumph.

By contrast, the press seminar accompanying the release of Felisa Wolfe-Simon et al.’s A bacterium that can grow by using arsenic instead of phosphorus issued no such caveats with its claims. Likewise, it was readily apparent from the methods section of the paper that the Arsenic Life team made no strong effort to refute their own data (the principal aim of experimentation), and the review process at Science should probably have been more rigorous than standard practice. It is perhaps repeated too often and without consideration, but I’ll mention the late, great Carl Sagan’s assertion that “extraordinary claims require extraordinary evidence.” The OPERA team kept this in mind, while the Arsenic Life paper showed a strong preference for sweeping under the carpet any due diligence in considering alternative explanations. Ultimately, the OPERA results were explained as an instrumentation error, and the Arsenic Life discovery has been refuted in several independent follow-up experiments (e.g. [1], [2]).

Is the disclosure of this propellant-less propulsion finding on par with Arsenic Life or FTL neutrinos? In this case I would lean toward the latter: more a search for instrumentation error than a claim of the discovery of Totally New Physics. The title of the tech report, “Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum”, conveys the minimum requisite dose of skepticism.

Background reading below, but by far the best take on the subject is xkcd number 1404. The alt-text: “I don’t understand the things you do, and you may therefore represent an interaction with the quantum vacuum virtual plasma.”

23/08/2014 several typos corrected
[UPDATE: Full version of tech report: http://rghost.net/57230791] via comments from http://ow.ly/ADJqb
http://www.wired.co.uk/news/archive/2014-07/31/nasa-validates-impossible-space-drive
http://www.wired.co.uk/news/archive/2014-08/07/10-qs-about-nasa-impossible-drive
http://www.wired.com/2014/08/why-nasas-physics-defying-space-engine-is-probably-bogus/?mbid=social_twitter
http://en.wikipedia.org/wiki/Quantum_vacuum_plasma_thruster
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20140006052.pdf
http://www.wired.co.uk/news/archive/2013-02/06/emdrive-and-cold-fusion
http://www.aiaa.org/EventDetail.aspx?id=18582

Why is there no confidence in science journalism?


Living in the so-called Anthropocene, meaningful participation in humanity’s trajectory requires scientific literacy. This requirement holds at the population level: it is not enough for a small proportion of select individuals to develop this expertise, applying it only to the avenues of their own interest. Rather, a general understanding and use of the scientific method in forming actionable ideas for modern problems is a requisite for a public capable of steering policy along a survivable route. As an added benefit, scientific literacy produces a rarely avoided side-effect: knowing one or two things for certain, and touching upon the numinous of the universe.

Statistical literacy is a necessary foundation for building scientific literacy. Confusion about the meaning of terms such as “statistical significance” (compounded by non-standard usage of the term “significance” on its own) abounds, so that little of the import of these concepts is transferred when scientific results are described in mainstream publications. Worse, this produces a jaded public knowing just enough to twist the jargon of science to support their own predetermined, potentially dangerous, conclusions (e.g. because scientific theories can be refuted by evidence to the contrary, a given theory, no matter the level of support from existing data, can be ignored when forming personal and policy decisions).

I posit that a fair amount of the responsibility for improving the state of non-specialist scientific literacy lies with science journalists at all scales. The most popular science-branded media do little to nothing to impart a sense of the scientific method, the context and contribution of published experiments, or the meaning of the statistics underlying the claims. I suggest that a standardisation of the language for describing scientific results is warranted, so that results and concepts can be communicated in an intuitive manner without resorting to condescension, while still conveying the quantitative, comparable values used to form scientific conclusions.

A good place to start (though certainly not perfect) is the uncertainty guidance put out by the Intergovernmental Panel on Climate Change (IPCC). The IPCC reports benefit from translating statistical concepts of confidence and likelihood into intuitive terms without sacrificing the underlying quantitative meaning (mostly). In the IPCC AR5 report guidance on addressing uncertainty [pdf], likelihood statements of probability are standardised as follows:


In the fourth assessment report (AR4), the guidance [pdf] roughly calibrated confidence statements to a chance of being correct. I’ve written the guidance here in terms of p-values, or (loosely) the chance that results are due to coincidence (p = 0.10 = 10% chance), but statistical tests producing other measures of confidence were also covered.
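
The AR4 calibration can be sketched the same way (the thresholds come from the AR4 guidance note; the function and the worked example are my own illustration):

```python
# IPCC AR4 confidence scale: approximate chance of being correct -> term.
AR4_CONFIDENCE = [
    (0.9, "very high confidence"),   # at least 9 in 10 chance
    (0.8, "high confidence"),        # about 8 in 10
    (0.5, "medium confidence"),      # about 5 in 10
    (0.2, "low confidence"),         # about 2 in 10
    (0.0, "very low confidence"),    # less than 1 in 10
]

def ar4_confidence(p_correct):
    """Map a probability of being correct to the AR4 confidence statement."""
    for threshold, term in AR4_CONFIDENCE:
        if p_correct >= threshold:
            return term

# A result reported at p = 0.05 (a 5% chance of coincidence, in the loose
# reading above) already earns the top bracket:
print(ar4_confidence(1 - 0.05))  # very high confidence
```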


Describing results by their confidence, rather than the statistical significance normally used, is probably more intuitive to most people. Few general readers readily distinguish between statistical significance, i.e. the results are unlikely to be due to chance, and meaningful significance, i.e. the results matter in some way. Likewise, statistical significance conventions are not well standardised even within the scientific literature and vary widely by field. That being said, the IPCC’s AR4 guidance threshold for very high confidence is quite low. Many scientific results are only considered reportable at a p-value below 0.05, or a 5% chance that the pattern in the data is a coincidence, whereas the AR4 guidance links a statement of very high confidence to anything with less than a 10% chance of being wrong. Likewise, a 5-in-10 chance of being correct hardly merits a statement of medium confidence, in my opinion. Despite these limitations, I think the guidance should merely have been updated to better reflect the statistical reality of confidence; it was a mistake for the AR5 guidance to switch to purely qualitative standards for conveying confidence based on the evidence-and-agreement table below, with highest confidence in the top right and lowest confidence in the bottom left.


Adoption (and adaptation) of standards like these in regular usage by journalists could do a lot to improve the communication of science to a general readership. It would normalise field-variable technical jargon (e.g. sigma significance values in particle physics, p-values in biology) and reduce the need for daft analogies. Results described in this way would be amenable to meaningful comparison by generally interested but non-specialist audiences, while those with a little practice in statistics would be no less informed, since the underlying quantitative meaning is preserved rather than dumbed down.

Edited 2016/06/25 for a better title, added comic graphic. Source for file of cover design by Norman Saunders (Public Domain)
23 Aug. 2014: typo in first paragraph corrected:

. . . meaningful participation in participating in humanity’s trajectory. . .


Michael D. Mastrandrea et al. Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. IPCC Cross-Working Group Meeting on Consistent Treatment of Uncertainties, Jasper Ridge, CA, USA, 6-7 July 2010. <http://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf>

IPCC. Guidance Notes for Lead Authors of the IPCC Fourth Assessment Report on Addressing Uncertainties. July 2005. <https://www.ipcc-wg1.unibe.ch/publications/supportingmaterial/uncertainty-guidance-note.pdf>

Much ado about sitting

A few years ago, athletic shoe companies began to cash in on a study or two suggesting that running in shoes was dangerous, guaranteed to ruin your joints and your life, make you less attractive and confident, etc. (at least, that’s how it was translated into press coverage). The only viable answer, vested marketing implied, was to buy a new pair of shoes with less shoe in them.

Despite the obvious irony, consumers flocked to purchase sweet new kicks and rectify their embarrassing running habits. Much like in any other fitness craze, popular active-lifestyle magazines ran articles about the trend, spinning a small amount of scientific research into definitive conclusions right next to advertisements for the shoes themselves. Fast forward to 2014, wherein the makers of arguably the most notorious shoes in the minimalist sector, the Vibram FiveFingers line, have moved to settle a lawsuit alleging that the claimed health benefits of the shoes were not based on evidence. The market frenzy for minimalist footwear appears to have sharply abated. There are even blatant examples of market backlash in the introduction of what could be described as “marshmallow shoes”, such as the Hoka, with even more padding than runners were used to before the barefoot revolution.

An eerily similar phenomenon, market capitalisation on nascent scientific evidence, has appeared around the latest demon threatening our health: sitting. At the bottom of it all is an orogenic marketplace for accessories designed to get workers a bit less semi-recumbent in the workplace. This market was virtually non-existent only a few years ago, yet is now substantial enough to have spawned an entire genre of internet article.

There is even a new term gaining traction for the condition: “sitting disease.” I sure hope it’s not catching. For now at least the term seems to remain quarantined in quotation marks most places it is used.

Many of the underlying articles in science journals are what is euphemistically referred to as survey science. Long generation time, a lack of uniform cultivation standards and potential ethical considerations make Homo sapiens a rather poor model organism. Even if survey data were considered reliable (a dubious assumption), they only reveal associations. Even accelerometer studies, like those at the Mayo Clinic, only measure activity for a few weeks. The results can’t tell you that sitting alone causes obesity. An equally fair hypothesis would be that obesity increases the likelihood of staying seated, but that’s just called inertia.
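
To make the associations-only point concrete, here is a toy simulation (entirely made-up effect sizes): a model in which weight drives sitting time, and sitting does nothing at all to weight, still produces exactly the correlation a cross-sectional survey would report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model with the causal arrow reversed: body weight influences
# sitting time, and sitting has no effect on weight whatsoever.
weight = rng.normal(80, 12, n)                       # kg, invented numbers
sitting = 4 + 0.05 * weight + rng.normal(0, 1, n)    # hours/day

# A cross-sectional survey would still find sitting "linked to" weight:
r = np.corrcoef(sitting, weight)[0, 1]
print(f"correlation between sitting time and weight: {r:.2f}")  # ~0.51
# The survey statistic is identical whichever way the arrow points.
```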

Although the studies and their press coverage motivate a burgeoning marketplace for NEAT (non-exercise activity thermogenesis) accessories, they don’t actually tell us much in the way of new information. A sedentary lifestyle is unhealthy. Attempts to increase the amount of low-intensity activity throughout the day, such as using a walking desk, are likely to stimulate appetite. Without considering diet (and downplaying the importance of exercise), a standing desk, sitting ball, or occasional walking meeting is not likely to have tremendous health benefits taken alone. And despite the rhetoric, maintaining a smoking habit to break up your sit-time with walks to the outdoors is probably not an equivalent trade-off. Presenting health management in such an unbalanced, single-variable way seems motivated more by trendiness for some, revenue for others, and both for the press. It is not that sitting is actually good for you; it’s just myopic to focus solely on that one health factor. As part of a sedentary lifestyle gestalt, yes, it does play a role in promoting ill health. Then again, if you think about it, you probably already knew that before it was cool.

Avoid sensationalist science journalism, consider the sources:
Ford, E.S., and Caspersen, C.J. (2012). Sedentary behaviour and cardiovascular disease: a review of prospective studies. Int J Epidemiol 41, 1338–1353.
Hamilton, M.T., Hamilton, D.G., and Zderic, T.W. (2007). Role of low energy expenditure and sitting in obesity, metabolic syndrome, type 2 diabetes, and cardiovascular disease. Diabetes 56, 2655–2667.
Katzmarzyk, P.T., Church, T.S., Craig, C.L., and Bouchard, C. (2009). Sitting time and mortality from all causes, cardiovascular disease, and cancer. Med Sci Sports Exerc 41, 998–1005.
Rosenkranz, R.R., Duncan, M.J., Rosenkranz, S.K., and Kolt, G.S. (2013). Active lifestyles related to excellent self-rated health and quality of life: cross sectional findings from 194,545 participants in The 45 and Up Study. BMC Public Health 13, 1071.
Rovniak, L.S., Denlinger, L., Duveneck, E., Sciamanna, C.N., Kong, L., Freivalds, A., and Ray, C.A. (2014). Feasibility of using a compact elliptical device to increase energy expenditure during sedentary activities. Journal of Science and Medicine in Sport 17, 376–380.
Schmid, D., and Leitzmann, M.F. (2014). Television Viewing and Time Spent Sedentary in Relation to Cancer Risk: A Meta-analysis. JNCI J Natl Cancer Inst 106, dju098.
Young, D.R., Reynolds, K., Sidell, M., Brar, S., Ghai, N.R., Sternfeld, B., Jacobsen, S.J., Slezak, J.M., Caan, B., and Quinn, V.P. (2014). Effects of Physical Activity and Sedentary Time on the Risk of Heart Failure. Circ Heart Fail 7, 21–27.

“Where is everybody?”


Don’t get too excited about finding E.T. just yet. Get excited about the engineering.

A few days ago NASA held a press conference moderated by NASA Chief Scientist Ellen Stofan. The filtered headline that eventually made its way into the popular consciousness of the internet is that the discovery of extraterrestrial life is a paltry couple of decades away. The ways the conference was parsed into news form ranged from the relatively guarded “NASA scientists say they’re closer than ever to finding life beyond Earth” at the LA Times to the more sensational “NASA: ALIENS and NEW EARTHS will be ours inside 20 years” at The Register. As statements go, the former headline is almost unavoidably true given the assumption that humans eventually stumble upon life off-planet, and the latter is only one more over-capitalised word away from being wholly fantastic. Neither actually touches on the content of the NASA press conference.

The conference was partially prompted by the April announcement of the Kepler program’s discovery of the Earth-similar Kepler 186f, which happens to reside in the habitable zone of its eponymous parent star. Although Kepler 186f definitely might be sort of a bit more Earth-like, its discovery was only the latest in a long list of over 1,800 exoplanets posited to exist to date. Although the main technique for exoplanet discovery, stellar dimming attributable to planetary transits, is not infallible [paywalled primary source], the continued refinement of modern signal processing for unearthing (heh) exoplanet signatures makes this an exciting time to look skyward.
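
The transit technique itself reduces to simple geometry: the fractional dimming is roughly the ratio of the planet’s disc area to the star’s. A quick sketch with textbook radii (illustrative values only):

```python
# Transit photometry: the fractional dip in starlight when a planet crosses
# its star is approximately (R_planet / R_star)**2.
R_SUN_KM = 696_000
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911

def transit_depth(r_planet_km, r_star_km):
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-like:   {transit_depth(R_EARTH_KM, R_SUN_KM):.6f}")    # ~0.00008
print(f"Jupiter-like: {transit_depth(R_JUPITER_KM, R_SUN_KM):.6f}")  # ~0.01
# An Earth analogue dims a Sun-like star by under 0.01%, which is why
# detections lean so heavily on signal processing.
```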

The speakers took a broad view of the progression toward answering the question “are we alone?” John Grunsfeld, Hubble mechanic extraordinaire, emphasised the approach of looking for spectral signals corresponding to bio-signatures with the upcoming James Webb telescope. Of course, the terracentric focus shared by the panel means that NASA plans to look for signals associated with Earth life: water, methane, oxygen, etc. Carl Sagan et al. considered the task of finding similar biosignatures on Earth itself. Looking for signs we know to be associated with our own personal experience of life is our best current guess for what we should be looking for, but there is no guarantee that it is the right one. We are no longer too enthralled by the idea of trading arsenate for phosphate, but our own planet hosts enough examples of strange metabolism that we should expect life off-planet to present even more peculiar possibilities. Imagine our chagrin if we spend a few centuries looking for spectral signatures of water before stumbling across hydrophobic biochemistry on Titan.

Many of us may remember the nanobe-laden Martian meteorite ALH84001 that touched off a burst of interest and a flurry of Mars probes in the latter half of the 1990s. Like the 100-200 nm fossilised “bacteria” in the Mars meteorite, the tone of imminent discovery of extraterrestrial life (particularly in the sensationalist coverage by the lay press) serves as little more than hyperbolic rhetoric. If this effect carries over to those with a hand on the purse-strings, so much the better, but don’t get too caught up as a member of the scientifically literate and generally curious public. The likelihood of finding life beyond our own planet in a given time span is essentially impossible to predict with no priors, hence the famous Fermi paradox that graces the title of this post. The actual content of the video is much more important than the wanton speculation that fuels its press coverage.

A major advantage of placing the Hubble Space Telescope above the atmosphere was to avoid optical aberrations generated by atmospheric turbulence. The present state of the art in adaptive optics and signal processing essentially obviates this need, as ground-based telescopes such as the Magellan II in Chile can now outperform the Hubble in terms of resolution. The James Webb will offer some fundamentally novel capabilities in what it can see, with a 6.5 m primary mirror and sensors covering wavelengths from 600-nanometre red to the mid-infrared at 28 microns.
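
For a sense of what that 6.5 m aperture buys, the diffraction-limited angular resolution scales as 1.22 λ/D (the Rayleigh criterion); a back-of-envelope sketch using the mirror size and wavelength range quoted above:

```python
import math

def angular_resolution_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion: theta = 1.22 * wavelength / aperture, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

# JWST's 6.5 m mirror at the red end of its range (600 nm) vs mid-IR (28 um):
print(angular_resolution_arcsec(600e-9, 6.5))   # ~0.023 arcsec
print(angular_resolution_arcsec(28e-6, 6.5))    # ~1.1 arcsec
```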

The upcoming TESS survey, described by MacArthur Fellow Sara Seager, will use the same basic technique as the Kepler mission, observing planetary transits, to look for exoplanets. TESS will launch in 2017, slightly in advance of the main attraction, the JWST. Looking for planetary transits has served us well in the past, but direct imaging is the holy grail. Seager described a starshade for occluding bright, planet-hosting stars to further that goal as part of the New Worlds mission. The design resembles a sunflower rather than a circular shade, as the latter would introduce Airy rings from diffraction around its edges, and desert tests of the prototypes have been encouraging so far. The precision engineering of the shade’s unfolding is another masterpiece. Due to its size, deployment cannot be tested in a terrestrial vacuum chamber, requiring the engineering to be all the more precise. I could see scale versions of the design doing quite well as parasols in the gift shop.


Image from NASA via Wikipedia

The natural philosophy that we now call science has roots in the same fundamental questions as “regular” philosophy. “Are we alone?” is really just a proxy for “Where are we, how does it work, and why are we here?” Without any definitive answers to these questions on the horizon, I think we can safely say that building the machines that allow us to explore them, and conditioning our minds to think about our universe, is a pretty good way to spend our time. It will be a lonely universe if we find ourselves to be a truly unique example of biogenesis, but not so lonely in the looking.

As for yours truly, I’m looking forward to the “Two Months of Terror” (to quote Grunsfeld), October-December 2018, as the James Webb telescope makes its way to the L2 Lagrange point to unfold and cool in preparation for a working life of precipitous discovery.

Link to video

Ellen Stofan - Chief Scientist, NASA
John Grunsfeld - Astrophysicist, former astronaut, Hubble mechanic
Matt Mountain - Director, Space Telescope Science Institute
John Mather - Project Scientist, James Webb telescope; 2006 Physics Nobel laureate
Sara Seager - Astrophysicist, MIT Principal Investigator, MacArthur Fellow 2013
Dave Gallagher - Electrical Engineer, Director of Astronomy and Physics at the Jet Propulsion Laboratory

Also read up on ESA projects: the Herschel Space Observatory, observing at 60 to 500 microns, and Gaia, a satellite set to use parallax to generate a precise galactic census.

Top image by the author

Good Seeing


Long the purview of telescopes, the dynamic mirrors and wavefront engineering that enable astronomers to calm the night sky’s twinkle are now finding applications in biological microscopy as well. The techniques, termed adaptive optics, are leading to major improvements in the clarity and depth-imaging capabilities of today’s microscopes.

The eyes may or may not be the windows to the soul, but our ocular world plays a central role in how our minds are built. Of all human senses, sight is the most influential to our worldview, the most relied upon to ascertain the validity of our guesses about reality. Galileo’s observations showed us our place in the cosmos by providing the givens to test the conflicting ideas of Ptolemy and Aristotle against those of Copernicus and Kepler. Hooke, Leeuwenhoek, and Spinoza seeded the scientific landscape with observations that would provide an alternative to the commonly held belief that the causes of disease were malarial, that is, a result of “bad air”, and preventable by applied fragrance. Even the word “cell”, for the fundamental building block of life, was coined as Robert Hooke viewed the organisation of a slice of cork through the lens of a microscope.

Humans are lucky to live under a dense atmosphere, which keeps us warm and respiring while protecting us from (most) meteors drawn to our gravity well. The downside is that Earthbound astronomers are like a swimmer watching a birthday party from the bottom of a pool, an assuredly poor choice of viewpoint. The dense, turbulent atmosphere of the Earth confounded observation of especially dim extraterrestrial objects until a few decades ago, when deformable mirrors were introduced to astronomical telescopes to counteract atmospheric aberration. Before adaptive optics, astronomers had limited tools for counteracting the atmosphere, and made do by building observatories at high altitudes and waiting for “good seeing” conditions to attenuate the blurring of active air. In contrast, a modern adaptive optics telescope can best the resolution of the famous Hubble Space Telescope, as in the case of the adaptive-optics-enabled Magellan II in Chile.

Astronomers have to look out through the thick soup of the atmosphere, but biological microscopists looking into tissues have even more challenges to contend with. Looking at living cells in vivo is like trying to peer through a nice cup of milky chai, and it is a major hurdle to determining the nature of life intact and in action. This has led to the adaptation of the same techniques previously developed to take out the night’s twinkle for use in microscopy, now used to obviate the blur of imaging at depth.

The problem of imaging through a highly aberrating medium is experienced twice when imaging into tissues: once on the illumination side of the path and again as the signal leaves the sample. Optimising the amount of light that reaches the desired depth and location, and the amount of signal that successfully makes it back to a point detector or image sensor, determines the clarity and speed with which an image can be formed.

In microscope design there are three characteristics to optimise: temporal resolution (speed), spatial resolution, and depth (signal to noise). Improving one aspect of an instrument invariably leads to a decrease in another. Antonie van Leeuwenhoek made incredible observations using instruments made by carefully melting pulled strands of glass in a flame and setting the resulting aspherical lenses in pinholed brass frames. These tiny, single-lens instruments bear more resemblance to a magnifying glass than to a modern compound microscope. Using one was a matter of holding the entire instrument a few centimetres from the eye and squinting through a tiny aperture at the subject, generally illuminated by sunlight. Game-changing innovations that improve the overall capability of microscopy beyond zero-sum trade-offs in design optimisation are few and far between. It is becoming increasingly clear as the technology matures that adaptive optics is fundamentally enabling for imaging tasks that were not possible with the instruments of a decade ago.

One realm in which adaptive optics is finding ready application is optical imaging of the living brains of model organisms. Brain imaging was also the inspiration that led Marvin Minsky to invent the confocal microscope in the late 1950s. In confocal microscopy, a pinhole plate rejects the majority of light arising from out-of-focus areas as the microscope beam is scanned throughout the sample of interest. The pinhole plate ensures that out-of-focus light is rejected, but there is a fundamental limit to how much total optical power can be pumped into tissue before damage occurs. Thanks to the pinhole plate, improving confocal imaging with adaptive optics is straightforward: any increase in the signal making it to the detector indicates a positive correction for aberrations, so optimising the dynamic elements of the microscope is a matter of producing the maximum signal.
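
In code, that “maximise the signal” recipe looks something like the following sketch of sensorless adaptive optics. The mirror model and measurement function here are stand-ins I have invented for illustration; real systems perturb Zernike modes on actual hardware and read an actual detector.

```python
import random

def corrected_signal(mirror_modes, true_aberration):
    """Stand-in for a detector reading: the signal peaks when the deformable
    mirror's shape cancels the sample's aberration."""
    residual = sum((m - a) ** 2 for m, a in zip(mirror_modes, true_aberration))
    return 1.0 / (1.0 + residual)

# Hypothetical aberration expressed as a few modal coefficients.
true_aberration = [0.8, -0.3, 0.5]
mirror = [0.0, 0.0, 0.0]

# Simple hill climb: keep any random perturbation that increases the signal.
random.seed(1)
best = corrected_signal(mirror, true_aberration)
for _ in range(2000):
    i = random.randrange(len(mirror))
    trial = mirror[:]
    trial[i] += random.uniform(-0.1, 0.1)
    s = corrected_signal(trial, true_aberration)
    if s > best:           # brighter image => better correction, keep it
        mirror, best = trial, s

print(mirror, best)  # coefficients approach the aberration; signal tends to 1
```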


Beam shaping and point-spread function engineering enable new imaging modalities in microscopy.

The application of adaptive optics to neuroimaging instills a strong sense of the cutting edge, combining brain science with optical physics, but some areas where adaptive optics has made a striking impact are much more domestic. Researchers at Durham University, UK employed adaptive optics confocal microscopy to measure the effects of temperature on the activity of cold-water lipases, enzymes that break down fat and grease. Enzyme names are one of the rare cases of scientific nomenclature being intuitive and informative: the name “lipase” identifies the enzyme’s substrate as lipids, or fats, and the suffix “-ase” denotes its activity of cutting them apart. Much like a greasy fingerprint on a pair of sunglasses, the very presence of the substrate induces unwanted blurring amenable to correction with dynamic optical elements.

The state of the art is no longer limited to improving the precision and design of static optics. Dynamic elements allow the microscope and the microscopist to adapt to specific imaging situations, a task for which algorithms and image processing are essential. The computational brains of modern microscopes are integral components of the optical system, as essential as the lenses and mirrors that make up the physical hardware.

In pushing the limits of the types of scientific questions that can be addressed with light, there is no requirement to generate a two-dimensional image. Rather, the data required to test a given hypothesis may exist as a three-dimensional construct, a four-dimensional volume plus time, or something of even higher dimensionality. Although visualisation of data will continue to be important for science communication, the central role of the image in science may soon take a back seat to generalised, multidimensional data. Paralleling this shift, the next generation of light microscopes will look radically different from our conventional expectations. The shift has already appeared in commercially available instruments: the computer is so integral to the light sheet microscope made by German optics giant Zeiss that the instrument does not have eyepieces. The microscopes we use tomorrow will resemble modern microscopes to the same extent that modern microscopes remind us of Leeuwenhoek’s hand lenses.


In short order, that shiny new confocal system may share the fate of this Leeuwenhoek replica. This piece is part of the collection at the Oxford Museum of the History of Science.

Adaptive optics links:




Edit 2014/06/24: Fixed links