A Skeptic Over Coffee – Young Blood


A tragic tale of a star-crossed pair,
science vs. a journalist’s flair

When reporting on scientific topics, particularly when describing individual papers, how important is it for the popular coverage to have anything to do with the source material? Let’s take a look at a recent paper from Justin Rebo and others in Nature Communications, and the accompanying coverage by Claire Maldarelli at Popular Science.

Interest in parabiosis has increased recently due to coverage of scientific papers describing promising results in mice and the high profile of some parabiosis enthusiasts. Parabiosis, from the Greek for “living beside”, has typically involved stitching two mice together. After a few days the fused tissue provides blood exchange through a network of newly formed capillaries.

The most recent investigation into the healing effects of youthful blood exchange, from Rebo et al., expands the equipment list beyond the old technique of, in effect, duct-taping two animals together by surgically joining them. Instead of relying on the animals to grow new capillary beds for blood exchange to occur, the authors of the new paper used a small pump to exchange a few drops of blood at a time until each mouse carried approximately equal parts of its own blood and its partner’s.
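The repeated small exchanges converge on a 50/50 mix surprisingly quickly. A toy simulation (blood volume and drop size here are assumptions for illustration, not numbers from the paper):

```python
# Toy simulation of pump-based blood exchange between two mice.
# Volume and exchange size are illustrative assumptions, not from the paper.

V = 2.0      # total blood volume per mouse, mL (assumed)
d = 0.15     # volume exchanged per pump cycle, mL ("a few drops", assumed)

fA, fB = 1.0, 0.0   # fraction of A-origin blood in mouse A and in mouse B
for cycle in range(60):
    # Each cycle, each mouse loses d mL of its current mix and gains
    # d mL of the other mouse's current mix.
    fA, fB = (fA * (V - d) + fB * d) / V, (fB * (V - d) + fA * d) / V

print(round(fA, 3), round(fB, 3))  # both approach 0.5
```

The difference between the two fractions shrinks by a factor of (V − 2d)/V each cycle, so no capillary growth is needed for the blood pools to equilibrate.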

According to the coverage from Popular Science:

While infusing blood from a younger mouse into an older mouse had no effect on the elderly mouse in the latest study, infusing blood from an older mouse into a younger one caused a host of problems in organs and other tissues.

Just a few paragraphs further Maldarelli quotes Conboy (last author on the paper) as saying “‘This study tells us that young blood, by itself, cannot work as medicine’.” In contrast, in the paper the authors state that “Importantly, our work on rodent blood exchange establishes that blood age has virtually immediate effects on regeneration of all three germ layer derivatives.” and later that “. . . extracorporeal blood manipulation provides a modality of rapid translation to human clinical intervention.”[1] There seems to be a bit of disagreement between the version of Conboy on the author list of the scientific article and the version of Conboy quoted in the PopSci coverage of the same article.

We also learned from Maldarelli that the tests reported in the paper were performed a month after completing the blood exchange procedure; according to the paper itself, however, the longest duration from blood exchange to the experiment’s end (sacrifice for post-mortem tissue analysis) was 6 days.

I came across the PopSci coverage when it appeared on a meta-news site that highlights popular web articles, so it’s safe to assume I wasn’t the first to read it. Shouldn’t the coverage of scientific articles reported in the lay press have more in common with the source material than just buzzwords? The science wasn’t strictly cut and dried: not every marker or metric responded in the same way to the old/young blood exchange, and while I agree that we shouldn’t be encouraging anyone to build a blood-exchange rejuvenation pod in their garage, the findings of the article fell a long way from the conclusion reported in the lay article: that young blood had no effect on the physiology of old mice. This says nothing about the quality of the paper itself and the confidence we should assign to the experimental results in the first place: with 12 mice total* and a p-value cutoff of 0.05 (1 out of every 20 experiments will appear significant at random), I’d take the original results with a grain of salt as well.
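The small-sample caveat is easy to demonstrate: with six mice per group and no true effect at all, roughly one comparison in twenty will still clear p < 0.05. A quick simulation using a standard pooled two-sample t-test (not the paper’s actual analysis; the critical value 2.228 is the two-tailed 5% cutoff for 10 degrees of freedom):

```python
import numpy as np

# Simulate many null experiments: two groups of n = 6, no real difference.
rng = np.random.default_rng(0)
n, trials, t_crit = 6, 5000, 2.228   # |t| > 2.228 is "significant" at df = 10
false_pos = 0
for _ in range(trials):
    a = rng.normal(size=n)           # "old blood" group, no true effect
    b = rng.normal(size=n)           # "young blood" group, no true effect
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)   # pooled SD
    t = (a.mean() - b.mean()) / (sp * np.sqrt(2 / n))
    false_pos += abs(t) > t_crit

print(false_pos / trials)  # hovers around 0.05
```

One in twenty null comparisons “works,” which is exactly why a single small-n result deserves a grain of salt.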

This is the face of science we show the public, and it’s unreliable. It is no easy task for journalists to accurately report and interpret scientific research. Deadlines are tight, and writers face competition and pressure from cheap amateur blogs and regurgitation feeds. “What can I do to help?” you ask. As a consumer of information you can demand scientific literacy in the science news you consume. Ask writers to convey confidence and probability in a consistent way that non-specialists can understand and compare to other results. As a bare minimum, science and the press that covers it should have more in common than the latest brand of esoteric jargon.

If we only pay attention to the most outlandish scientific results, then most scientific results will be outlandish.

*The methods describe a purchase of 6 old and 6 young mice. However, elsewhere in the paper the groups are said to contain 8 mice each. Thus it is not clear how many mice in total were used in these experiments, or how the authors managed to create 12 blood exchange pairings for both control and experimental groups without re-using the same mice.

[1] Rebo, J. et al. A single heterochronic blood exchange reveals rapid inhibition of multiple tissues by old blood. Nat. Commun. 7, 13363 doi: 10.1038/ncomms13363 (2016).

A skeptic over coffee: who owns you(r) data?


“Everyone Belongs to Everyone Else”

-mnemonic marketing from Aldous Huxley’s Brave New World

A collaboration between mail-order genomics company 23andMe and pharmaceutical giant Pfizer reported 15 novel genes linked to depression in a genome-wide association study published in Nature. The substantial 23andMe user base and relative prevalence of the mental illness provided the numbers necessary to find correlations between a collection of single nucleotide polymorphisms (SNPs) and the condition.

This is a gentle reminder that even when the service isn’t free, you very well may be the product. It’s not just Google and Facebook whose business plans hinge on user data. From 23andMe’s massive database of user genetic information to Tesla’s fleet learning Autopilot (and many more subtle examples that don’t make headlines), you’re bound to be the input to a machine learning algorithm somewhere.

On the one hand, it’s nice to feel secure in a little privacy now and again. On the other, blissful technological utopia? If only the tradeoffs were so clear. Note that some (including bearded mo. bio. maestro George Church) say that privacy is a thing of the past, and that openness is the key (the 23andMe study participants consented that their data be used for research). We’ve known for a while that it’s possible to infer the sources of anonymous genome data from publicly available metadata.

The data of every person are fueling the biggest changes of our time in transportation, technology, healthcare and commerce, and there’s a buck (or a trillion) to be made there. It remains to be seen whether the benefits will mainly be consolidated by those who already control large pieces of the pie or will fall largely to the multitudes making up the crust (with plenty of opportunities for crumb-snatchers). On the bright side, if your data make up a large enough portion of the machine learning inputs for the programs that eventually coalesce into an omnipotent AI, maybe there’ll be a bit of you in the next generation superorganism.

Through the strange eyes of a cuttlefish

A classic teaching example in black and white film photography courses is the tomato on a bed of leaves. Without the use of a color filter, the resulting image is low-contrast and visually uninteresting. The tomato is likely to look unnaturally dark and lifeless next to similarly dark leaves; although in a color photograph the colors make for a stark contrast, in fact the intensity values of the red and green of tomato fruit and leaves are nearly the same. The use of a red or green filter can attenuate the intensity of one of the colors, making it possible for an eager photographer to undertake the glamorous pursuit of fine-art salad photography.
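The effect is easy to sketch numerically. The RGB values below are illustrative guesses for a tomato and its leaves, not measurements; the point is that their luminances land close together while a red filter (which passes mostly the red channel) pulls them far apart:

```python
# Illustrative RGB triples (assumed, not measured from a real photograph).
tomato = (200, 40, 40)
leaf   = (60, 130, 50)

def luminance(rgb):
    # Rec. 601 luma: roughly what an unfiltered panchromatic frame records
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def red_filtered(rgb):
    # A red filter transmits mostly the red channel
    return rgb[0]

print(luminance(tomato), luminance(leaf))        # similar gray values
print(red_filtered(tomato), red_filtered(leaf))  # tomato now much brighter
```

Without the filter the two gray values differ by barely 10%; through the red filter the tomato records several times brighter than the leaves.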


The always clever cephalopods (smart enough to make honorary vertebrate status in UK scientific research) somehow manage to pull off a similar trick without the use of a photographer’s color filters. Marine biologists have been flummoxed for years by the ability of squid, cuttlefish, and octopuses* to effect exact color camouflage in complex environments, and their impressive use of color patterning in hunting and inter-species communication. The paradox is that their eyes (cephalopods, not marine biologists) only contain a single type of photoreceptor, rather than the two or more different color photoreceptors of humans and other color sensitive animals.

Berkeley/Harvard duo Stubbs & Son have put forth a plausible explanation for the age-old paradox of color camouflage in color-blind cephalopods. They posit that cephalopods use chromatic aberration and a unique pupil shape to distinguish colors. With a wide, w-shaped pupil, cephalopods potentially retain much of the color blurring of different wavelengths of light. Chromatic aberration is nothing more than color-dependent defocus, and by focusing through the different colors it is theoretically possible for the many-limbed head-foots to use their aberrated eyes as an effective spectrophotometer, using a different eye length to sharply focus each color. A cuttlefish may distinguish tomato and lettuce in a very different way than a black and white film camera or human eyes.


A cuttlefish’s take on salad

A cuttlefish might focus each wavelength sequentially to discern color. In the example above, each image represents preferential focus for red, green, and blue, from top to bottom. By comparing each image to every other image, the cephalopod could learn to distinguish the colorful expressions of its friends, foes, and environment. Much as our own visual system automatically filters and categorizes objects in a field of view before we know it, this perception likely occurs at the level of “pre-processing,” before the animal is acutely aware of how it is seeing.


How a cuttlefish might see itself


A view of the reef.

A typical night out through the eyes of a cuttlefish might look something like this:

There are distinct advantages to this type of vision in specialized contexts. Using only one type of photoreceptor, light sensitivity is increased compared to the same eye with multiple types of photoreceptors (ever notice how human color acuity falls off at night?). Mixed colors would look distinctly different, and, potentially, individual pure wavelengths could be more accurately distinguished. In human vision we can’t tell the difference between an individual wavelength and a mix of colors that happens to excite our color photoreceptors in the same proportions as the pure color, but a cuttlefish might be able to resolve these differences.

On the other hand, the odd w-shaped pupil of cephalopods retains more imaging aberrations than a circular pupil (check out the dependence of aberrations on the pupil radius in the corresponding Zernike polynomials to understand why). As a result, cephalopods would have slightly worse vision in some conditions as compared to humans with the same eye size; mainly, those conditions consist of living on land. Human eyes, for their part, are not well-suited to the higher refractive index of water as compared to air. We would also probably need to incorporate some sort of lens hood (e.g. something like a brimmed hat) to deal with the strong gradient of light formed from light absorption in the water, another function of the w-shaped cephalopod pupil.

Studying the sensory lives of other organisms provides insight into how they might think, illuminating our own vision and nature of thought by contrast. We may still be a long way off from understanding how it feels to instantly change the color and texture of one’s skin, but humans have just opened a small aperture into the minds of cuttlefish, increasing our understanding of the nature of thought and experience.

How I did it
Every image is formed by smearing light from a scene according to the point spread function (PSF) of the imaging system. This is a consequence of the wave nature of light and the origin of the diffraction limit. In Fourier optics, the PSF is the squared magnitude of the Fourier transform of the pupil function. To generate the PSF, I thresholded and dilated this image of a common cuttlefish eye (public domain from Wikipedia user FireFly5) before taking the Fourier transform and squaring the result. To generate the images and video mentioned above, I added differential defocus (using the Zernike polynomial for defocus) to each color channel and cycled through the resulting three monochromatic images. I used ImageJ and Octave for image processing.
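A minimal numpy version of that pipeline, with a synthetic circular pupil standing in for the thresholded cuttlefish-pupil image (grid size, pupil radius, and the defocus amplitude are all arbitrary choices for illustration):

```python
import numpy as np

# Build a synthetic pupil mask on an N x N grid.
N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.sqrt(x**2 + y**2) / (N // 4)       # normalized pupil radius
pupil = (r <= 1.0).astype(float)

def psf(pupil_amplitude, defocus=0.0):
    # Zernike defocus (~ 2r^2 - 1) enters as a phase error across the pupil;
    # the PSF is the squared magnitude of the pupil function's Fourier transform.
    phase = defocus * (2 * r**2 - 1)
    field = pupil_amplitude * np.exp(1j * phase)
    h = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(h) ** 2

in_focus  = psf(pupil)
defocused = psf(pupil, defocus=6.0)       # e.g. a color channel focused elsewhere
print(in_focus.max() > defocused.max())   # defocus spreads the PSF out: True
```

Convolving each color channel of an image with its own defocused PSF reproduces the sequential-focus views shown above; total energy is conserved, it just spreads over more pixels.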

Sources for original images in order of appearance:





And Movie S2

*The plural of octopus has all the makings of another senseless ghif/gif/zhaif controversy. I have even heard one enthusiast insist on “octopodes”.

Bonus Content:


Primary color disks.

In particular, defocus pseudocolor vision would make for interesting perceptions of mixed wavelengths. Observe the color disks above (especially the edges) in trichromatic and defocus pseudo-color.



The aperture used to calculate chromatic defocus.

Bonus content original image sources:

Swimming cuttlefish in camouflage CC SA BY Wikipedia user Konyali43 available at: https://commons.wikimedia.org/wiki/File:Camouflage_cuttlefish_03.jpg

The aperture I used for computing chromatic defocus is a mask made from the same image as the top image for this post: https://en.wikipedia.org/wiki/File:Cuttlefish_eye.jpg

2017/05/03 – Fixed broken link to Stubbs & Stubbs PNAS paper: http://www.pnas.org/content/113/29/8206.full.pdf

Perspective across scales (Spores molds and fungus* – recap)

*Actually just lichens and a moldy avocado

Take your right hand and cover your left eye. Keeping both eyes wide open, look at an object halfway across the room. You can now “see through your hand.”** Your brain compiles the world around you into a single image that we intuitively equate with media such as photography and video, but in fact (as evidenced by your brain ignoring your hand occluding half your visual inputs) this mental image of the world is compiled from two different perspectives. The processing side of the human visual system is therefore very well set up to interpret stereographic images. Some people complain about this, but you can always file a bug report with reality if it becomes too much trouble.

Human binocular vision works pretty well at scales where the inter-ocular distance provides a noticeable difference in perspective, but not for objects that are very close or very far away. This is why distant mountains look flat [citation needed], and we don’t have good spatial intuition for very small objects, either. Stereophotography can improve our intuition of objects outside the scales of our usual experience. By modifying the distance between two viewpoints, we can enhance our experience of perspective.
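The geometry behind this is small-angle stuff: the angular disparity between the two viewpoints is roughly the baseline divided by the distance. A quick sketch (the distances are illustrative):

```python
import math

def disparity_deg(baseline_m, distance_m):
    # Small-angle approximation: angular difference between the two
    # lines of sight, in degrees.
    return math.degrees(baseline_m / distance_m)

eyes = 0.065   # human inter-ocular distance, roughly 6.5 cm
print(disparity_deg(eyes, 0.5))     # nearby object: several degrees
print(disparity_deg(eyes, 5000))    # distant mountain: effectively zero
print(disparity_deg(10.0, 5000))    # a 10 m "hyperstereo" baseline restores it
```

This is why widening the baseline (or shrinking it, for macro subjects like lichens) restores a useful sense of depth at scales our eyes can’t handle.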

For these stereo photos of lichens, I used a macro bellows with a perspective control lens. This type of lens is used for fixing vanishing lines in architectural photography or for making things look tiny that aren’t, but in this case it makes a useful tool for shifting perspective by a few centimetres.



It would probably be easier to move the sample instead.


The images below require a pair of red blue filters or 3D glasses to shepherd a different perspective image into each eye, for spatial interpretation in your meat-based visual processor.






Another way to generate the illusion of dimensionality is parallax. This is a good way to judge depth when your eyes are on opposite sides of your head.





**If you currently have use of only a single eye, the same effect can be achieved by holding the eye of a needle or other object thinner than your pupil directly in front of the active eye. This is something that Leonardo (the blue one) remarked on, and it suggests the similarities between imaging with a relatively large aperture (like your dilated pupil) and an “image” reconciled from multiple images at different perspectives, e.g. binocular vision.

Super Gravity Brothers


The GW150914 black hole merger event recorded by aLIGO, represented in a wavelet (Morlet basis) spectrogram. This spectrogram was based on the audio file released with the original announcement.

The data from the second detection, GW151226, are another beast entirely, in that the signal is very much buried in the noise.

Raw data:


Wavelet spectrogram:

The LIGO Open Science Center makes these data available, along with signal processing tutorials.

Now to see how the professionals do it:

I used MATLAB’s wavelet toolbox for the visualisations, aided by this example.
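For readers without the wavelet toolbox, a Morlet spectrogram can be hand-rolled in a few lines. The sketch below runs on a synthetic upward chirp standing in for the real strain data (which the open science center hosts); sample rate, frequency grid, and the Morlet parameter w0 = 6 are conventional choices, not anything from the LIGO analysis:

```python
import numpy as np

# Synthetic chirp: instantaneous frequency sweeps from 50 Hz to 250 Hz.
fs = 1024
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * (50 + 100 * t) * t)

def cwt_row(sig, freq, w0=6.0):
    # One spectrogram row: convolve with a complex Morlet wavelet whose
    # center frequency (in Hz) is `freq` at this scale.
    s = w0 * fs / (2 * np.pi * freq)              # scale for target frequency
    k = np.arange(-4 * s, 4 * s + 1) / s
    wavelet = np.exp(1j * w0 * k) * np.exp(-k**2 / 2) / np.sqrt(s)
    return np.abs(np.convolve(sig, wavelet, mode="same"))

freqs = np.arange(40, 260, 5.0)
spec = np.array([cwt_row(signal, f) for f in freqs])

# Peak frequency early vs. late in the record: the chirp climbs.
early = freqs[spec[:, 100].argmax()]
late = freqs[spec[:, 900].argmax()]
print(early, late)
```

Plotting `spec` with time on one axis and `freqs` on the other gives the characteristic rising-chirp ridge seen in the GW150914 spectrogram.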

Gravitational wave observation GW150914 with black hole merger simulation

I couldn’t find anyone that had combined the gravitational wave chirp observed by LIGO with the simulated visualisation of the putative black hole merger by SXS, so I decided to give it a try myself. Consider it to be illustrative, rather than rigorous.

In the first run-through, the LIGO gravitational wave observation from 2015 Sept 14 (audio chirp) is speed- and pitch-adjusted to match the SXS visualisation. Mergers 2–5 adjust the SXS simulation to match the chirp, alternating between native and pitch-adjusted frequency to cater to human hearing.

LIGO observation:
Abbott, B. P. et al. Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett. 116, 61102 (2016). https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102

Visualisation modified from Simulating eXtreme Spacetimes (SXS) Project http://www.black-holes.org
Source material used under CC-NC-BY licence (creativecommons.org). Feel free to reuse and remix, but retain attributions.

The structure behind the simplicity of CRISPR/Cas9


The International Summit on Human Gene Editing took place in Washington D.C. a few weeks ago, underlining the critical attention that continues to follow CRISPR/Cas9 and its applications to genome editing. Recently I compared published protocols for CRISPR/Cas9 and a competing technique based on Zn-finger nucleases. Comparing the protocols suggested that editing with CRISPR/Cas9 is somewhat simpler than using Zn-fingers, but it didn’t discuss the biomolecular mechanisms underlying the increased ease of use. Here I’ll illustrate the fundamental difference between genome editing with Cas9 and the alternatives in simple terms, using relevant protein structures from the Protein Data Bank.

Each of the techniques I’ll mention here has the same end goal: break double-stranded DNA in a specific location. Once a DNA strand undergoes this type of damage, a cell’s own repair mechanisms take over to put it back together. It is possible to introduce a replacement strand and encourage the cell to incorporate this DNA into the break instead of the original sequence.

The only fundamental difference among the main techniques used for genome editing is the way they are targeted. Cas9, Zn-finger, and Transcription Activator-Like (TAL) nucleases all aim to make a targeted break in DNA. Other challenges, such as getting the system into cells in the first place, are shared by all three systems.


Zinc Fingers (red) bound to target DNA (orange). A sufficient number of fingers like these could be combined with a nuclease to specifically cut a target DNA sequence.


Transcription Activator Like (TAL) region bound to target DNA. Combined with a nuclease, TAL regions can also effect a break in a specific DNA location.


Cas9 protein (grey) with guide RNA (gRNA, red) and target DNA sequence (orange). The guide RNA is the component of this machine that does the targeting. This makes the guide RNA the only part that needs to be designed to target a specific sequence in an organism. The same Cas9 protein, combined with different gRNA strands, can target different locations on a genome.

Targeting a DNA sequence with an RNA sequence is simple. RNA and DNA are both chains of nucleotides, and the rules for binding are the same as for reading out or copying DNA: A binds with T, U binds with A, C binds with G, and G binds with C [1]. Targeting a DNA sequence with protein motifs is much more complicated. Unlike with nucleotide-nucleotide pairing, I can’t fully explain how these residues are targeted, let alone in a single sentence. This difference has consequences for the initial design of the gRNA as well as for the efficacy of the system and the overall success rate.

So the comparative ease-of-application stems from the differences in protein engineering vs. sequence design. Protein engineering is hard, but designing a gRNA sequence is easy.

How easy is it really?

Say that New Year’s Eve is coming up, and we want to replace an under-functioning Acetaldehyde Dehydrogenase [2] with a functional version. First we would need a ~20 nucleotide sequence from the target DNA, like this one from just upstream of the ALDH1B gene:


You can write out the base-pairings by hand or use an online calculator to determine the complementary RNA sequence:


To associate the guide RNA to the Cas9 nuclease, the targeting sequence has to be combined with a scaffold RNA which the protein recognises.

Scaffold RNA:

Target Complement:

Target complement + scaffold = guide RNA:

With that sequence we could target the Cas9 nuclease to the acetaldehyde dehydrogenase (ALDH1B) gene, inducing a break and leaving it open to replacement. The scaffold sequence above turns back on itself at the end, sinking into the proper pocket in Cas9, while the target complement sequence coordinates the DNA target, bringing it close to the cutting parts of Cas9. If we introduce a fully functional version of the acetaldehyde dehydrogenase gene at the same time, then we surely deserve a toast as the target organism no longer suffers from an abnormal build-up of toxic acetaldehyde. Practical points remain to actually prepare the gRNA, make the Cas9 protein, and introduce the replacement sequence, but from an informatic design point of view that is, indeed, the gist.
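The informatic step really is one-liner territory. A sketch of the design procedure described above; the seven-base target and the scaffold string are placeholders for illustration, not the actual ALDH1B sequence or the true scaffold RNA:

```python
# Complement a DNA target into RNA and append the scaffold, following the
# base-pairing rules in the text: A->U, T->A, G->C, C->G.
RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def targeting_rna(dna_target_5to3):
    # Complement each base, then reverse so the RNA also reads 5' -> 3'.
    return "".join(RNA_COMPLEMENT[b] for b in reversed(dna_target_5to3))

def guide_rna(dna_target_5to3, scaffold):
    # Target complement + scaffold = guide RNA
    return targeting_rna(dna_target_5to3) + scaffold

SCAFFOLD = "GUUUUAGAGCUA"   # truncated placeholder, not the full scaffold

print(targeting_rna("GATTACA"))           # UGUAAUC
print(guide_rna("GATTACA", SCAFFOLD))
```

Swap in the real ~20-nucleotide target and the real scaffold sequence, and this is the entire design computation; contrast that with engineering a fresh DNA-binding protein for every new target.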

That’s the basics of targeting Cas9 in 1,063 words. I invite you to try and explain the intricacies of TAL effector nuclease protein engineering with fewer words.


[1] That’s C for cytosine, G for guanine, U for uracil, and A for adenine. In DNA, uracil is replaced by thymine (T).

[2] Acetaldehyde is an intermediate produced during alcohol metabolism, thought to be largely responsible for hangovers. A mutation in one or both copies of the gene can lead to the so-called “Asian Flush”.

Sources for structures:

I rendered all of the structures using PyMol. The data come from the following publications:

PDB structure: 3VEK (Zn-finger)

Wilkinson-White, L.E., Ripin, N., Jacques, D.A., Guss, J.M., Matthews, J.M. DNA recognition by GATA1 double finger. To be published.

PDB structure: 3ugm (TAL)

Mak, A.N., Bradley, P., Cernadas, R.A., Bogdanove, A.J., Stoddard, B.L. The Crystal Structure of TAL Effector PthXo1 Bound to Its DNA Target. (2012) Science 335: 716-719

PDB structure: 4oo8 (Cas9)
Nishimasu, H., Ran, F.A., Hsu, P.D., Konermann, S., Shehata, S.I., Dohmae, N., Ishitani, R., Zhang, F., Nureki, O. Crystal structure of Cas9 in complex with guide RNA and target DNA. (2014) Cell 156: 935-949

Comic cover original source:
“Amazing Stories Annual 1927” by Frank R. Paul – Scanned cover of pulp magazine. Licensed under Public Domain via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Amazing_Stories_Annual_1927.jpg#/media/File:Amazing_Stories_Annual_1927.jpg

What’s the big deal with CRISPR/Cas9?


Cas9 (grey) in complex with yellow guide RNA and red target DNA. PDB structure 4oo8 manipulated in PyMOL by yours truly. Cas9, like competing genome editing technologies (TALENs and ZFNs), is a nuclease. Click to view animated GIF.

Summary: Eliminate hereditary diseases. Re-program pathological tissue. Design babies. Bring back the T. rex. The peril and promise of genetic engineering has been a long time coming. Generally speaking, none of the wonders we began collectively imagining with the deduction of DNA structure in the 1950s have come to fruition. At the turn of the millennium, with the completion of the human genome project(s), we expected personalized medicine to eradicate inefficacies and side effects in modern medicine. Current development based on bacterial immune systems promises to either revolutionise the treatment of genetic disease or fill the world with ten-foot tall babies shooting lasers out of their perfect blue eyes while playing professional basketball and winning Nobel Prizes.

My first foray into a wet lab consisted of a project straight out of the astounding futures your favourite sci-fis promised you (or warned you about): incorporating functional genetic elements from humans into fungal cells. After a summer spent pushing the limits of what is possible and blurring the lines of what it means to be human, I created a terrible organism neither man nor yeast. Unable to find acceptance among people and no longer satisfied by nature’s intentions, these fungal colonies, the bizarre offspring of one man’s twisted mind and leavening products, found the cruel world to be too much and jumped into an autoclave while reciting Macbeth.

Despite the hyperbolic passage above, the monsters yet live. The strain ended up in a laboratory-grade freezer at negative eighty degrees (Celsius, of course, the lab being free of both astrologers and barbarians). The little yeasties are probably still chilling in the small cardboard box where I left them, covered in frost and enjoying a nice bath of glycerol cryo-protectant, traveling through time in suspended animation until the world is ready for them.

The human genes and their counterparts in baker’s yeast are similar enough that in this case one could substitute for the other (at least in one direction). The function of these metabolic keystones, known as ATP synthases, is an ancient one: churning the potential energy of an electron gradient into the cellular energy storage molecule adenosine triphosphate (ATP). They are primeval enough that the human version acts as a suitable stand-in in a strain of Saccharomyces cerevisiae otherwise incapable of aerobic respiration. I had precisely engineered a genetic vector that inserted directly into the location of the yeast’s genome where the native version had been removed. And by “precisely engineered” I mean that it was so easy an undergrad could do it, as I did.

Recently a technique based on CRISPR (Clustered Regularly Interspersed Short Palindromic Repeats) and CRISPR-Associated Proteins (such as Cas9) has garnered a lot of attention in the press as well as the scientific community. The word-sequences up-regulating all the excitement highlight the ease and effectiveness of CRISPR/Cas9 over previous methods. The technique’s critical reception has run the full range from drooling anticipation to worried alarm to bad puns.

Since my early days in the lab playing god with the design of human-yeast splices, I’ve continued down the rabbit-hole of biological scale, to the point that I now work more often with single molecules of biomolecular machinery than with cells directly. So I’m certainly out of the loop and out of a practical grasp of the rationale underlying CRISPR/Cas9 genome editing. After all, spider silk proteins have been produced in mammalian cells since before 2002, and are regularly produced in goat’s milk. Does CRISPR/Cas9 change the game to a degree that warrants the flood of interest?


The interest surrounding CRISPR

I’ll skip over the high-level technical overviews that you’ve probably read before, but for those with the time and interest I can recommend Jennifer Doudna’s Breakthrough Prize lecture. Instead I’ll compare two protocols, the first based on CRISPR/Cas9 and the second based on an older technique using another type of engineered nuclease known as zinc-finger nucleases (ZFNs). I scraped both protocols from the same publication, so apparent differences due to style should be small. To get a sense of the complexity of each technique, here are the two protocols as wordle word-clouds, displaying the 256 most frequently used words in each protocol, sized according to their relative usage.


ZFN protocol: word frequency word cloud


CRISPR/Cas9 protocol: word frequency word cloud

The table below compares the complexity and length of each protocol. The reading complexity measures were generated with this tool; in short, the first measure decreases with increased complexity while the second two increase with added complexity.
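For the curious, the first kind of measure (Flesch reading ease, where higher scores mean easier text) can be approximated in a few lines. The syllable count here is a crude vowel-group heuristic, so treat the absolute numbers as rough:

```python
import re

def syllables(word):
    # Crude heuristic: count groups of consecutive vowels (y included).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))

print(flesch_reading_ease("Cut the DNA. Add the new gene."))
print(flesch_reading_ease(
    "Electrophoretically verify oligonucleotide hybridization efficiency."))
```

Short words and short sentences score high; polysyllabic protocol-speak drives the score down fast, which is exactly the pattern the table captures.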


At first glance we see that the CRISPR/Cas9 protocol is much longer and more complicated, but if we consider that the Zn-finger nuclease protocol only describes the process up to in vitro validation, we can make a much more equivalent comparison by truncating the CRISPR/Cas9 protocol to its first 13 steps. The resulting comparison:


The associated Wordle even looks a bit friendlier.


So suffice it to say that it’s not easy to see the underpinnings of the excitement surrounding major developments such as CRISPR/Cas9. Essentially, the advantages of the CRISPR-based approach stem from the relative ease of designing guide RNAs versus engineering the amino-acid-based DNA-binding domains required by the competing techniques, ZFNs and TALENs (the latter not compared here). In the brewer’s yeast I modified “back in the day,” targeting the desired genes to the desired location was as simple as including a sequence from the target location on the DNA to be inserted; there are sufficient double-stranded breaks in a flask of yeast culture to allow the gene to find its target a few times. With specifically targetable nucleases such as Cas9, zinc-finger nucleases, and TALENs, one doesn’t have to count on such an easy model organism to precisely manipulate a small number of cells for a desired change to the genome.

The increased interest alone is sure to drum up funding, public intrigue, and private investment, driving the impact forward as a self-fulfilling prophecy. The more interested and excited people are about CRISPR/Cas9, particularly those people with the deep pockets to fill out scientists’ salaries, the more the technique will be subjected to use and refinement. More people using the tool drives the potential for meaningful breakthroughs. On the other hand, we have been promised, and warned of, this same onrushing biopunk dystopia before, and as they say: if this is the future, where are my gene-driven superpowers?


Published protocols referenced in this post:
[1] Carroll, D., Morton, J. J., Beumer, K. J., & Segal, D. J. (2006). Design, construction and in vitro testing of zinc finger nucleases. Nature Protocols, 1(FEBRUARY 2006), 1329–1341. http://doi.org/10.1038/nprot.2006.231

[2] Ran, F. A., Hsu, P. P. D., Wright, J., Agarwala, V., Scott, D. a, & Zhang, F. (2013). Genome engineering using the CRISPR-Cas9 system. Nature Protocols, 8(11), 2281–308. http://doi.org/10.1038/nprot.2013.143

[2015/12/14 EDIT – copyediting]

Feynman’s take on prizes/Feynman takes the prizes


Photo by Tamiko Thiel, 1984*

Richard Feynman was known as much (nay, definitely more) for his personality and his approach to science as a generalist as for his contributions to quantum electrodynamics. Feynman was famously skeptical of awards, honors, prizes, and the like.

“Interviewer: Was it worth the Nobel Prize?”

“RF: I don’t know anything about the Nobel Prize, I don’t understand what it’s all about or what’s worth what. If the people in the Swedish Academy of Sciences think x, y or z wins the Nobel Prize, then so be it. . .”

“. . . I’ve already got the prize! The prize is the pleasure of finding the thing out, the kick in the discovery, the observation that other people use it, those are the real things. The honors are unreal to me. . .”

Putting a scientific career over science is a mistake that leads a life toward common drudgery. The jolt of discoveries, be they great or unacknowledged outside one’s own mind and notebook, is the reward. Nobel season should serve as a reminder of, not a distraction from, the reality that there is a deeper meaning to the work of scientists than publish or perish.

Excerpt from BBC interview with Feynman, uploaded by youtube user batxg3

Congratulations go out to this year’s winners. May the Prize fail to occlude the science you have yet to do.

Licensing of the photograph from http://en.wikipedia.org/wiki/File:RichardFeynman-PaineMansionWoods1984_copyrightTamikoThiel_bw.jpg
*This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

You are free:

to share – to copy, distribute and transmit the work
to remix – to adapt the work

Under the following conditions:

attribution – You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
share alike – If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.