Structured illumination: the Bruce Wayne of the super-resolution league?

[Image: “noBell”, a crossed-out bell]

A week before Eric Betzig shared the Nobel Prize with Stefan Hell and William Moerner for super-resolution fluorescence microscopy, I listened to him give a talk at an imaging conference in Edinburgh, Scotland. The talk focused on Structured Illumination Microscopy (SIM). Betzig repeatedly stressed that the idea that SIM does not belong in the same category as STED and the super-localisation techniques is ludicrous. He is so convinced of this that his group has shifted its focus to developing applications of SIM for live imaging.

The best image resolution obtained by SIM is only about twice as good as that imposed by the normal diffraction limit, paling in comparison to the hundredfold improvement sometimes seen with STED, but SIM is faster and runs on a leaner light budget than the rest of the super-resolution stable. These are non-trivial advantages when the subject is alive and would prefer to stay that way. Biologists can learn a lot from studying something that was formerly alive, but much more from cells in the dynamic travails of life.

If Betzig is convinced that SIM is more amenable to practical application than other super-resolution techniques, such as Photo-Activated Localisation Microscopy (PALM), the technique that won him the Nobel Prize, why was SIM left out when it came time for the Royal Swedish Academy of Sciences to recognise super-resolution? The answer may lie more in the rules and peculiarities surrounding the awarding of a Nobel than in scientific relevance and impact, but you wouldn’t guess that from reading the Scientific Background on the Nobel Prize in Chemistry 2014.

In the published view of the Kungliga Vetenskapsakademien, SIM, “although stretching Abbe’s limit of resolution,” remains “confined by its prescriptions.” In other words, the enhancement beyond the diffraction limit achieved by structured illumination is just not super enough. In principle, the resolution of STED can be improved without limit by switching your depletion laser from “stun” to “kill” (i.e. increasing the depletion intensity). Likewise, super-localisation is essentially a matter of taking a large number of images of blinking fluorescent tags; improving the effective resolution is a case of tuning the chemistry of your fluorescent molecules and taking an enormous number of images. In reality, practical problems halt further improvement long before either technique approaches the resolution of, say, a Heisenberg microscope. SIM, however, is subject to an “aliasing limit,” which, for the nonce, seems to be as hard and fast as Abbe’s and Rayleigh’s resolution criteria were for the last hundred years (and largely still are, outside of fluorescence techniques).
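To make the contrast concrete, here is the textbook resolution scaling for STED alongside the frequency-space argument behind linear SIM’s factor-of-two cap. This is a standard formulation, not something quoted from the Academy’s document; I use I_s for the depletion saturation intensity and k_0 for the diffraction-limited cutoff frequency.

```latex
% STED: resolution sharpens without bound as the depletion intensity I grows
d_{\mathrm{STED}} \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{s}}}

% Linear SIM: the illumination pattern is itself diffraction-limited, so the
% highest observable spatial frequency is the sum of the illumination and
% detection cutoffs: at most twice the conventional limit
k_{\mathrm{obs}} \le k_{\mathrm{ill}} + k_{\mathrm{det}} \le 2k_0
```

Cranking up I drives d_STED towards zero in principle, whereas no amount of cranking moves 2k_0; that asymmetry is the whole of the Academy’s distinction.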

With only one exception that I know of, a Nobel Prize is not awarded posthumously. Despite the justification proffered in the official background, Mats Gustafsson’s untimely death in 2011 may have played a major role in the exclusion of super-resolution structured illumination microscopy. Combined with the cap of three people sharing a single Prize, this left Rainer Heintzmann and the late Gustafsson without Nobel recognition of their contributions to super-resolution. Even granting the somewhat arbitrary adjudication over what it is to truly “break” the diffraction limit, it seems curious that one of the super-res laureates has moved almost entirely away from the prize-winning technique he invented, preferring instead the under-appreciated SIM. The Nobel Prize is arguably the ultimate distinction in scientific endeavour, and it seems beneath the station of the prize for its issue to be governed so strictly by arbitrary statutes. Then again, the true reward of scientific achievement is not a piece of gold and a pile of kronor, but the achievement itself. The universe isn’t altered whether or not you win a Nobel Prize for uncovering one of its little secrets; the truth of the secret remains regardless.

‘Anonymous’ has an intriguing comment about why super-resolution is still not finding common use in biology research here.

Interesting note: unlike STED and PALM/PAINT/STORM, structured illumination can also be applied to quantitative phase imaging.

The original version of the bell .svg file is from: http://commons.wikimedia.org/wiki/File:H%C3%A9raldique_meuble_Cloche.svg

Seeing at Billionths and Billionths

This was my (unsuccessful) entry into last year’s Wellcome Trust Science Writing Prize. It is very similar to the post I published on the 24th of June, taking a slightly different tack on the same theme.

The Greek word skopein underlies the etymology of a set of instruments that laid the foundations for our modern understanding of nature: microscopes. References to the word are recognisable across language barriers thanks to the pervasive influence of the ancient languages of scholarship, and common usage gives us hints as to the meaning: we scope out a new situation, implying that we not only give a cursory glance but also take some measure or judgement.

Drops of glass held in brass enabled Robert Hooke and Antonie van Leeuwenhoek to make observations that gave rise to germ theory. Light microscopy unveiled our friends and foes among bacteria, displacing humours and miasmas as the supposed primary effectors of human health and disease. The concept of miasmatic disease, the notion that disease is caused by tainted air, is now so far-fetched that the term has been almost entirely lost to time. The bird-like masks worn by plague doctors were stuffed with potpourri: the thinking of the time was that fragrance alone could protect against the miasma of the Black Death. The idea seems silly to us now, thanks to the fruits of our inquiry. The cells described by Hooke and the “animalcules” seen by Leeuwenhoek marked a transition from a world operated by invisible forces to one in which the mechanisms of nature were vulnerable to human scrutiny. In short, science was born in the backs of our eyes.

The ability of an observer using an optical instrument to differentiate between two objects has, until recently, been limited by the tendency of waves to bend at boundaries, a phenomenon known as diffraction; the same effect can be seen in water ripples bending around a pier. The limiting effects of diffraction were formalised by the German physicist Ernst Abbe in 1873.

If the precision of optical components is tight enough to eliminate aberrations, and seeing conditions are good enough, imaging is “diffraction-limited.” With the advent of adaptive optics, dynamic mirrors and the like let observers remove aberrations from the sample as well as the optics. Originally developed to spy on dim satellites through a turbulent atmosphere, adaptive optics have recently been applied to microscopy to counteract the blurring effect of looking through tissue. If astronomy is like looking out from underwater, microscopy is like peering at the leaves at the bottom of a hot cuppa, complete with milk.

Even with the precise control afforded by adaptive optics, the best possible resolution is still governed by the diffraction limit, about half the wavelength of light. Before Leeuwenhoek’s time the microbial world was invisible; through most of the 20th century, the molecular machinery underpinning the cellular processes of life was likewise invisible, smeared by the diffraction limit into an irresolvable blur.

A human cell is typically on the order of ten microns in diameter. Its proteins, membranes, and DNA are organised at a level about one-thousandth as large, in the tens and hundreds of nanometres. In a conventional microscope, information at this scale is not retrievable thanks to diffraction, yet it underlies all of life. Many of the mechanisms of disease operate at this level as well, and knowledge of how and why cells make mistakes has resounding implications for cancer and ageing. In the past few decades, physicists and microscopists have developed a number of techniques to go beyond the diffraction limit and measure the nanometric technology that makes life.

A number of techniques have been developed to surpass the diffraction barrier. They vary widely in their use of engineered illumination, engineered fluorescent proteins, or both. What they share is computation: the computer has become as important an optical component as a proper lens.

New instrumentation enables new measurements at the behest of human inquiry. Questions about biology at ever smaller spatial scales and under ever more challenging imaging conditions generate the need for higher-precision techniques, in turn opening a floodgate of previously unobtainable data. New data lead to new questions, and the cycle continues until it abuts the fundamental laws of physical nature. Before bacteria were discovered, it was impossible to imagine their role in illness, and equally impossible to test it. Once the role was known, it became a simple intuitive leap for Alexander Fleming to suppose that the growth inhibition of bacteria by fungi he saw in the lab might be useful as medicine. With the ability to see at the level of tens of nanometres, another world of invisible forces has been opened to human consideration and innovation. Scientists have already leaped one barrier at the diffraction limit; with no fundamental limit to human curiosity, let us consider current super-resolution techniques the first of many triumphs of detection past the limits of what is deemed possible.

Does STED really break the law?


Ernst Abbe, legendary German physicist of yore, defined the resolution of a microscope as a function of the ratio of the objective aperture radius to the working distance of the imaged object: the closer the object (the shorter the focal length) and the wider the objective lens, the greater the resolution, all else being equal. This is commonly referred to as “Abbe’s diffraction limit,” a fundamental, physical limit on our ability to form images with lenses. Wavelength plays into it too, which is why the longer wavelengths used in, say, two-photon microscopy can only resolve distinct light sources over a longer distance.
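Written out, the limit takes its familiar textbook form (a standard statement, not a quotation from Abbe; NA is the numerical aperture, which the aperture-radius-to-working-distance ratio approximates):

```latex
d = \frac{\lambda}{2\,n\sin\theta} = \frac{\lambda}{2\,\mathrm{NA}}
```

Here lambda is the wavelength, n the refractive index of the medium, and theta the half-angle of the cone of light the objective accepts: a wider aperture or shorter working distance increases sin(theta), and a shorter wavelength shrinks d.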

I remember sitting in a neurology seminar last year where the topic was the imaging capabilities of STED (STimulated Emission Depletion) confocal microscopy, a super-resolution technique, for living neurons. Immediately before the section describing the method (a doughnut-shaped depletion beam drives fluorophores surrounding the focal point back to the ground state by stimulated emission, preventing them from contributing to the intensity measurement at that point) came a cartoon involving a police officer and some sort of criminal hijinks. The title of the slide was something along the lines of “How to break the law.”

As one audience member was quick to point out, the fluorescence emission (from the middle of the doughnut-shaped stimulated-emission PSF) still must travel back through the objective and all the other optics, convolving with the various transfer functions of the lens elements at every step of the way. How can this be said to break the diffraction limit? I agree: strictly speaking, it doesn’t.

Because a confocal microscope is a scanning apparatus, it illuminates and records a single point at a time. The illumination path has a point spread function just as the imaging path does, so normally a fairly large volume, capable of containing many fluorophores, is illuminated and contributes to the light brought to focus by the imaging path. In STED, the doughnut beam depletes a volume surrounding the region of interest, so the only fluorophores capable of excitation by the non-toroidal excitation PSF reside in the central volume. Although the PSF incident on the photodetector is still diffraction-limited, the only fluorophores contributing to the signal are in that small, central, un-depleted volume.
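A minimal numerical sketch of that argument, with every parameter assumed rather than taken from any particular instrument: the effective STED PSF is the excitation spot multiplied by the probability of a fluorophore surviving the doughnut-shaped depletion, and its width shrinks as the depletion intensity rises even though each ingredient is diffraction-limited.

```python
import numpy as np

# Assumed, illustrative values (not from the post or any specific scope)
lam = 600e-9                 # fluorescence wavelength in metres
na = 1.4                     # numerical aperture
sigma = 0.21 * lam / na      # Gaussian approximation of the excitation spot width

r = np.linspace(-500e-9, 500e-9, 4001)        # radial coordinate across the focus
excitation = np.exp(-r**2 / (2 * sigma**2))   # diffraction-limited excitation spot
doughnut = (r / sigma)**2                     # parabolic approximation of the doughnut near its central zero

def fwhm(profile, r):
    """Full width at half maximum of a sampled 1-D profile."""
    above = r[profile >= profile.max() / 2]
    return above[-1] - above[0]

for saturation in [0, 1, 10, 100]:            # depletion intensity / saturation intensity
    # A fluorophore escapes depletion with probability exp(-saturation * doughnut)
    effective = excitation * np.exp(-saturation * doughnut)
    print(f"I/I_s = {saturation:3d}: effective FWHM = {fwhm(effective, r) * 1e9:5.1f} nm")
```

The printed widths fall roughly as 1/sqrt(1 + I/I_s), which is exactly the STED scaling that lets proponents claim unbounded resolution in principle.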

I’m not convinced that this corresponds to super-resolution, at least not without qualification. It certainly allows you to super-localise the fluorophores, but resolution in microscopy means differentiating between two proximal light sources in space and, I think this is crucial, in time. So there is a time scale involved in STED, and in all scanning methods, over which two objects can be resolved only if their movements are much slower than the scan. A highly dynamic process like cell membrane fluctuation would probably fall outside the processes that STED can image. A more qualified term would be appropriate for this and similarly limited techniques.
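Back-of-envelope numbers (assumed, typical values rather than anything from the seminar) make the time-scale point plain:

```python
# Assumed, typical point-scanning parameters, not from any specific instrument
pixels = 512 * 512     # pixels per frame
dwell = 10e-6          # dwell time per pixel in seconds

frame_time = pixels * dwell
print(f"~{frame_time:.1f} s per frame")   # ~2.6 s: far slower than millisecond-scale membrane dynamics
```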

I don’t make this claim simply because I’m catching my grumps early this year. I think the terminology is genuinely misleading. Consider the title of this paper: “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function” [1].
They demonstrate an interesting method for recovering depth in a widefield microscope, and they do pinpoint a dense population of fluorophores, some of which are only a few nanometres apart. But the fluorophores are effectively immobilised, and the entire imaging process takes 450 seconds.
“Resolution” and “sensitivity” are two very different things, and only under certain constraints can you use one to inform the other. The title in this case misleads the reader into assuming that physical laws are being broken, which is not the case. In particular, this mild misdirection will lead undergraduates and laypeople to misinterpret the claims of science. We should make an effort to avoid it, even if a rival is pulling in grant money with these sorts of “impossible” claims. After all, I’m sure Ernst Abbe wouldn’t buy it.
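To illustrate the distinction, here is a minimal simulation sketch, with every parameter assumed for the sake of the example: the centroid of a photon-limited, diffraction-limited spot localises a single emitter to a precision of roughly sigma/sqrt(N), far finer than the spot itself, yet this says nothing about telling apart two emitters sharing the same spot.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 100.0    # PSF width in nm (assumed, roughly diffraction-limited)
true_x = 0.0     # true emitter position

for n_photons in (100, 1_000, 10_000):
    # Each detected photon lands at a position drawn from the PSF; the mean
    # of the photon positions estimates the emitter position. Repeat the
    # experiment many times to measure the scatter of that estimate.
    estimates = [rng.normal(true_x, sigma, n_photons).mean() for _ in range(500)]
    print(f"N = {n_photons:6d} photons: precision ~ {np.std(estimates):4.1f} nm "
          f"(theory sigma/sqrt(N) = {sigma / np.sqrt(n_photons):4.1f} nm)")
```

Collecting more photons sharpens the localisation without ever narrowing the underlying PSF, which is exactly why localisation precision and resolution should not be conflated.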

[1] S.R.P. Pavani, M.A. Thompson, J.S. Biteen, S.J. Lord, N. Liu, R.J. Twieg, R. Piestun, and W.E. Moerner. Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function. Proc. Natl. Acad. Sci. USA 106(9) (2009).

Note: the 3D localisation of dense fluorophores they report, again by multiple rounds of partial photoactivation and photobleaching, was generated over 30 cycles of 30 exposures, each frame consisting of a 500 ms exposure. That’s 450 seconds, or seven and a half minutes, of acquisition time.