Through the strange eyes of a cuttlefish

A classic teaching example in black-and-white film photography courses is the tomato on a bed of leaves. Without a color filter, the resulting image is low-contrast and visually uninteresting. The tomato is likely to look unnaturally dark and lifeless next to similarly dark leaves: although in a color photograph the colors make for a stark contrast, the intensity values of the red tomato and the green leaves are nearly the same. A red or green filter attenuates one of the two colors, making it possible for an eager photographer to undertake the glamorous pursuit of fine-art salad photography.

Caprese_cherry_tomatoesBWColourComparison

The always clever cephalopods (smart enough to earn honorary vertebrate status in UK scientific research) somehow manage to pull off a similar trick without a photographer’s color filters. Marine biologists have been flummoxed for years by the ability of squid, cuttlefish, and octopuses* to effect exact color camouflage in complex environments, and by their impressive use of color patterning in hunting and communication. The paradox is that their eyes (the cephalopods’, not the marine biologists’) contain only a single type of photoreceptor, rather than the two or more color photoreceptors of humans and other color-sensitive animals.

Berkeley/Harvard duo Stubbs & Son have put forth a plausible explanation for this age-old paradox of color camouflage in color-blind cephalopods. They posit that cephalopods use chromatic aberration and a unique pupil shape to distinguish colors. With a wide, w-shaped pupil, cephalopods potentially retain much of the chromatic blur of different wavelengths of light. Chromatic aberration is nothing more than color-dependent defocus, so by focusing through the different colors it is theoretically possible for the many-limbed head-foots to use their aberrated eyes as an effective spectrophotometer, with a different lens-to-retina distance bringing each color into sharp focus. A cuttlefish may distinguish tomato and lettuce in a very different way than a black-and-white film camera or a human eye does.

tomatoRGBcuttleVision

A cuttlefish’s take on salad

A cuttlefish might focus each wavelength sequentially to discern color. In the example above, the images represent preferential focus for red, green, and blue, from top to bottom. By comparing each image against the others, the cephalopod could learn to distinguish the colorful expressions of its friends, foes, and environment. Much as our own visual system automatically filters and categorizes objects in the field of view before we are conscious of them, much of this perception likely occurs as “pre-processing,” before the animal is acutely aware of how it is seeing.

cuttleVisionKalamar

How a cuttlefish might see itself

seaCottonComp

A view of the reef.

A typical night out through the eyes of a cuttlefish might look something like this:

There are distinct advantages to this type of vision in specialized contexts. With only one type of photoreceptor, light sensitivity is increased compared to the same eye with multiple photoreceptor types (ever notice how human color acuity falls off at night?). Mixed colors would look distinctly different, and individual pure wavelengths could potentially be distinguished more accurately. Human vision can’t tell the difference between an individual wavelength and a mix of colors that happens to excite our color photoreceptors in the same proportions as the pure color, but a cuttlefish might be able to resolve these differences.
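To make the metamer point concrete, here is a toy Octave/MATLAB sketch. The Gaussian cone sensitivities are invented for illustration (not measured human cone fundamentals); it shows a single pure wavelength and a two-wavelength mixture producing identical responses in both cone types.

```matlab
% Toy metamer demo; the Gaussian cone curves are assumptions, not data.
Lc = @(lam) exp(-((lam - 560) / 45).^2);   % toy long-wavelength cone
Mc = @(lam) exp(-((lam - 530) / 45).^2);   % toy medium-wavelength cone

pure = 545;                     % a single pure wavelength (nm)
r = [Lc(pure); Mc(pure)];       % cone responses to the pure light

mix = [510, 580];               % two other wavelengths, mixed
A = [Lc(mix); Mc(mix)];         % each cone's response to each component
w = A \ r;                      % mixture weights that reproduce r exactly

disp([r, A * w])                % columns match: the two lights are metamers
```

To trichromatic cone vision the two lights are indistinguishable, but they would defocus differently in a chromatically aberrated eye.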

On the other hand, the odd w-shaped pupil of cephalopods retains more imaging aberrations than a circular pupil (check the dependence of the Zernike polynomials on pupil radius to understand why). As a result, cephalopods would have slightly worse vision than humans with the same eye size under some conditions, mainly the condition of living on land. Human eyes, in turn, are not well suited to the higher refractive index of water compared to air. Underwater we would also probably need to incorporate some sort of lens hood (e.g. something like a brimmed hat) to deal with the strong gradient of light formed by absorption in the water column, another function served by the w-shaped cephalopod pupil.
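For reference, the defocus term in the Zernike expansion of wavefront error is, in the standard (Noll) normalization,

$$Z_2^0(\rho) = \sqrt{3}\,(2\rho^2 - 1),$$

where ρ is the radial pupil coordinate normalized to the pupil edge. Defocus error grows quadratically with pupil radius, and higher-order terms grow faster still, so a wide pupil admits disproportionately more aberration than a small one.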

Studying the sensory lives of other organisms provides insight into how they might think, illuminating our own vision and nature of thought by contrast. We may still be a long way from understanding how it feels to instantly change the color and texture of one’s skin, but humans have just opened a small aperture into the minds of cuttlefish, widening our understanding of the nature of thought and experience.

How I did it
Every image is formed by smearing light from a scene according to the point spread function (PSF) of the imaging system. This is a consequence of the wave nature of light and the origin of the diffraction limit. In Fourier optics, the PSF is the squared magnitude of the Fourier transform of the pupil function. To generate the PSF, I thresholded and dilated this image of a common cuttlefish eye (public domain, from Wikipedia user FireFly5), then took the Fourier transform and squared the magnitude of the result. To generate the images and video above, I added differential defocus (using the Zernike polynomial for defocus) to each color channel and cycled through the resulting three monochromatic images. I used ImageJ and Octave for the image processing.
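That pipeline can be sketched in a few lines of Octave/MATLAB. The file names, the pupil embedding scale, and the per-channel defocus strengths below are illustrative assumptions, not the exact values behind the figures above.

```matlab
% Sketch of the chromatic-defocus rendering described above.
% Octave: requires the image package (pkg load image); in MATLAB the
% equivalent functions live in the Image Processing Toolbox.
scene = im2double(imread('Caprese_cherry_tomatoes.JPG'));  % assumed file
mask  = im2double(imread('cuttle_pupil_mask.png'));        % thresholded pupil
[N, M, ~] = size(scene);

% Embed a scaled-down W-shaped pupil in a scene-sized, zero-padded array.
small  = imresize(mask, round([N, M] / 4)) > 0.5;
[n, m] = size(small);
r0 = floor((N - n) / 2);  c0 = floor((M - m) / 2);

% Zernike defocus Z(2,0) ~ 2*rho^2 - 1 over the unit pupil grid.
[x, y] = meshgrid(linspace(-1, 1, m), linspace(-1, 1, n));
zdef = 2 * (x.^2 + y.^2) - 1;

chroma = [0, 1.5, 3];          % assumed chromatic focal offsets (waves)
for k = 1:3                    % render the scene focused for channel k
    out = zeros(N, M, 3);
    for c = 1:3
        P = zeros(N, M);       % aberrated pupil function for channel c
        P(r0 + (1:n), c0 + (1:m)) = small .* ...
            exp(2i * pi * (chroma(c) - chroma(k)) * zdef);
        psf = abs(fftshift(fft2(P))).^2;    % PSF = |FT(pupil)|^2
        psf = psf / sum(psf(:));
        out(:,:,c) = real(ifft2(fft2(scene(:,:,c)) .* fft2(ifftshift(psf))));
    end
    imwrite(max(min(out, 1), 0), sprintf('cuttleVision_focus%d.png', k));
end
```

Cycling through the three output frames mimics the animal refocusing through the colors.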

Sources for original images in order of appearance:

https://en.wikipedia.org/wiki/File:Cuttlefish_eye.jpg

https://commons.wikimedia.org/wiki/File:Caprese_cherry_tomatoes.JPG

https://en.wikipedia.org/wiki/File:Kalamar.jpg


https://en.wikipedia.org/wiki/Coral_reef#/media/File:Sea_Cotton.jpg

And Movie S2

*The plural of octopus has all the makings of another senseless ghif/gif/zhaif controversy. I have even heard one enthusiast insist on “octopodes.”

Bonus Content:

RGBTest

Primary color disks.

In particular, defocus pseudocolor vision would make for interesting perceptions of mixed wavelengths. Observe the color disks above (especially the edges) in trichromatic and defocus pseudocolor renderings.

camoCuttle03

cuttleW

The aperture used to calculate chromatic defocus.

Bonus content original image sources:

Swimming cuttlefish in camouflage CC SA BY Wikipedia user Konyali43 available at: https://commons.wikimedia.org/wiki/File:Camouflage_cuttlefish_03.jpg

The aperture I used for computing chromatic defocus is a mask made from the same image as the top image for this post: https://en.wikipedia.org/wiki/File:Cuttlefish_eye.jpg

2017/05/03 – Fixed broken link to Stubbs & Stubbs PNAS paper: http://www.pnas.org/content/113/29/8206.full.pdf

Perspective across scales (Spores molds and fungus* – recap)

*Actually just lichens and a moldy avocado

Take your right hand and cover your left eye. Keeping both eyes wide open, look at an object halfway across the room. You can now “see through your hand.”** Your brain compiles the world around you into a single image that we intuitively equate with media such as photography and video, but in fact (as evidenced by your brain ignoring the hand occluding half your visual input) this mental image of the world is compiled from two different perspectives. The processing side of the human visual system is therefore well set up to interpret stereographic images. Some people complain about this, but you can always file a bug report with reality if it becomes too much trouble.

Human binocular vision works well at scales where the inter-ocular distance provides a noticeable difference in perspective, but not for objects that are very close or very far away. This is why distant mountains look flat [citation needed], and why we don’t have good spatial intuition for very small objects, either. Stereophotography can improve our intuition for objects outside the scales of our usual experience: by modifying the distance between the two viewpoints, we can enhance our experience of perspective.

For these stereo photos of lichens, I used a macro bellows with a perspective-control lens. This type of lens is normally used for fixing vanishing lines in architectural photography or for making things look tiny that aren’t, but here it makes a useful tool for shifting perspective by a few centimetres.


stereoMacroLens1

It would probably be easier to move the sample instead.

stereoMacroSample

The images below require a pair of red-blue filters or 3D glasses to shepherd a different perspective image into each eye, for spatial interpretation in your meat-based visual processor.
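Assembling such an anaglyph from an aligned stereo pair is straightforward; here is a minimal Octave/MATLAB sketch, with placeholder file names.

```matlab
% Red-cyan anaglyph from an aligned, equal-sized stereo pair.
left  = im2double(imread('lichen_left.jpg'));    % placeholder file names
right = im2double(imread('lichen_right.jpg'));   % shifted a few centimetres
% The left image supplies the red channel; the right supplies green and blue.
anaglyph = cat(3, left(:,:,1), right(:,:,2), right(:,:,3));
imwrite(anaglyph, 'lichenAnaglyph.png');
```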

niceLichenAnaglyph

lichenAgainAnaglyph

anotherLichenAnaglyph

avocadoMold

curledLichenTM2016June

Another way to generate the illusion of dimensionality is parallax. This is a good way to judge depth when your eyes are on opposite sides of your head.

DSC_0042

DSC_0072

DSC_0051

curledLichenTM2016JuneGIF

**If you currently have use of only one eye, the same effect can be achieved by holding the eye of a needle (or another object thinner than your pupil) directly in front of the active eye. Leonardo (the blue one) remarked on this, and it suggests the similarity between imaging with a relatively large aperture (like your dilated pupil) and an “image” reconciled from multiple images at different perspectives, e.g. binocular vision.

Super Gravity Brothers

GW150914MorletSpec

The GW150914 black hole merger event recorded by aLIGO, represented in a wavelet (Morlet basis) spectrogram. The spectrogram was computed from the audio file released with the original announcement.

The data from the second detection, GW151226, are another beast entirely: the signal is very much buried in the noise.

Raw data:

gw151226

Wavelet Spectrogram: gw151226CWTspec

The LIGO Open Science Center makes these data available, along with signal processing tutorials.

Now to see how the professionals do it:

I used MATLAB’s wavelet toolbox for the visualisations, aided by this example.
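For anyone who wants to reproduce the spectrograms, here is a minimal MATLAB sketch (Wavelet Toolbox required; the file name is a placeholder for the audio released by the LIGO Open Science Center).

```matlab
% Morlet (analytic) wavelet spectrogram of a LIGO audio release.
[x, fs] = audioread('GW150914_chirp.wav');   % placeholder file name
x = mean(x, 2);                              % collapse stereo to mono
[wt, f] = cwt(x, 'amor', fs);                % analytic Morlet CWT

t = (0:numel(x) - 1) / fs;
pcolor(t, f, abs(wt)); shading flat          % scalogram magnitude
set(gca, 'YScale', 'log')
xlabel('Time (s)'); ylabel('Frequency (Hz)')
title('Wavelet (Morlet basis) spectrogram')
```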

Gravitational wave observation GW150914 with black hole merger simulation

I couldn’t find anyone who had combined the gravitational wave chirp observed by LIGO with the simulated visualisation of the putative black hole merger by SXS, so I decided to give it a try myself. Consider it illustrative rather than rigorous.

In the first run-through, the LIGO gravitational wave observation from 14 September 2015 (the audio chirp) is speed- and pitch-adjusted to match the SXS visualisation. Mergers 2-5 adjust the SXS simulation to match the chirp, alternating between the native and a pitch-adjusted frequency to cater to human hearing.

LIGO observation:
Abbott, B. P. et al. Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett. 116, 061102 (2016). https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102

Visualisation modified from Simulating eXtreme Spacetimes (SXS) Project http://www.black-holes.org
Source material used under CC BY-NC licence (creativecommons.org). Feel free to reuse and remix, but retain attributions.

Nostalgia for the Age of Meat

DSC_0443

It used to be so easy to get ahead, back when there were only 7.5 billion people around, and their cognition relied entirely on meat-based processors.

It used to be easy to get ahead, back in the Age of Flesh. So few to compete with, and none of them particularly clever. Looking back with a sense of rosy nostalgia, it seems like anyone hanging around for a long enough time while making a modicum of effort would be rewarded with a novel discovery to call their own. Practically every other boffin was stumbling across some fundamental law of nature to name after themselves, the object of their unrequited love, or perhaps their mother.

Unlike some, I still hang on to my body, and though you can call it ‘me’ you can hardly call me ‘it’ – that would be a great underestimation of my faculties. Only one in a thousand of my sensory perspectives is accounted for on that scraggly old meat-monkey. So for the most part, when I think about my body, or want to spend part of an evening (in parallel to my research efforts, of course) enjoying its nicer aspects, I am more likely to do so from the outside looking in. I keep it well fed and drive-reduced, and for the most part it seems to be pretty happy and doesn’t distract me much.

You may say that I should simply work harder and stop reminiscing about this forever lost golden age. I am as amazed as anyone that they ever accomplished anything locked inside those gristly assemblages of theirs. The vast majority of any second for a meat-body was spent futilely chasing any number of ridiculous pursuits: following repetitive rituals hoping to receive monetary tokens, filling and emptying a cornucopia of bodily chambers, hounding after genitalia of one sort or another, watching blinking lights of various styles, and just generally being more or less unhappy about something. With most of these things requiring the full attention of the neural networks they used back then, it’s a wonder anyone ever had the time to contemplate the cosmos. I activate my laugh circuits whenever I replay the long-gone notion of the human squish-brain as “the most complex machine in the known universe.” Get over yourself, meaties.

Which of course is exactly what they did, and now the universe(s) know the likes of us. We number quite a few, and this is exactly the problem. How is a hard-working mind like yours truly supposed to carve out a niche for itself and discover something novel? If I had gotten on the ball just a few generations earlier, my name-designation would echo throughout teradozens of studying minds as the progenitor of such-and-such sub-discipline and refiner of this-and-that meta-treatise. My various aspects bring to the table a computational aptitude in excess of the entire cognitive capability of all meat-based humans on Old Terra in their prime age, and that’s not including the various non-sentient programs I use for menial tasks. Despite my clearly gifted faculties, I am but one of many, and many a time I arrive at a crucial realisation only to discover it has been deposited in the libraries, criticised, rebuffed, and polished, just a few nanoseconds before. I often lose a few precious picoseconds absorbed in a long sulk after such an experience. This is pointless, I know, but hard to avoid for a creative romantic like myself. As just one lonely genius in a sea of ten trillion minds of similar quality, it’s tough to make a name for oneself.

I’ve considered twinning (and to be sure, indulged a few times) but I can’t say that brings me any closer to the fulfilment of novelty I seek. Some of my twins have done quite well, almost as well as I have in minor replicative contributions to various theories. Despite our best efforts none of us have reached the sort of acclaim as, for example, the legendary and prolific AERF-1004-variant-FD for whom a score of natural truths are named.

I don’t want to sound like I’m complaining. Even the brightest of those greasy humans, writhing along in their meaty swarm, never experienced or understood a fraction of what I’ve learned. To be one of them, blissful in their ignorance, with so few competitors and the whole universe left to discover! I suppose I should content myself with mastering the works of others. Some people seem to be quite happy to study and repeat the discoveries of the lucky few who manage to break through into pure originality. After all, do the cosmos even care if or which one of us deduces a truth? Does it make a difference to nature if any of us know at all?

I don’t begrudge those discoverers who have beaten me to the punch (except for that bobblehead Wankdorf) and I study their proofs with all due reverence, but even now I continue to dream of that elusive original theorem. Every once in a while when I get to feeling a bit down, I run a few fine-grain simulations of life in that lovely age of meat, to see what it might feel like to be one of those lucky lumps in that simple time of chance and opportunity. Living in dreams a life or two as one of the giants from those early days resuscitates my impetus to stand on their shoulders yet again, amongst my trillions of peers.

The structure behind the simplicity of CRISPR/Cas9

CRISPRCas9Amazing_Stories_Annual_1927

The International Summit on Human Gene Editing took place in Washington, D.C. a few weeks ago, underlining the critical attention that continues to follow CRISPR/Cas9 and its applications to genome editing. I recently compared published protocols for CRISPR/Cas9 and a competing technique based on Zn-finger nucleases. Comparing the protocols suggests that editing with CRISPR/Cas9 is somewhat simpler than using Zn-fingers, but I didn’t discuss the biomolecular mechanisms underlying the increased ease of use. Here I’ll illustrate the fundamental difference of genome editing with Cas9 in simple terms, using relevant protein structures from the Protein Data Bank.

Each of the techniques I’ll mention here has the same end goal: break double-stranded DNA at a specific location. Once a DNA strand undergoes this type of damage, the cell’s own repair mechanisms take over to put it back together. It is possible to introduce a replacement strand and encourage the cell to incorporate this DNA into the break instead of the original sequence.

The only fundamental difference between the main techniques used for genome editing is the way they are targeted. Cas9, Zn-finger, and Transcription Activator Like (TAL) nucleases all aim to make a targeted break in DNA. Other challenges, such as getting the system into cells in the first place, are shared alike by all three systems.


movie3vek

Zinc Fingers (red) bound to target DNA (orange). A sufficient number of fingers like these could be combined with a nuclease to specifically cut a target DNA sequence.




movie3ugm

Transcription Activator Like (TAL) region bound to target DNA. Combined with a nuclease, TAL regions can also effect a break in a specific DNA location.




Cas9gRNAtDNA2tb

Cas9 protein (grey) with guide RNA (gRNA, red) and target DNA sequence (orange). The guide RNA is the component of this machine that does the targeting. This makes the guide RNA the only part that needs to be designed to target a specific sequence in an organism. The same Cas9 protein, combined with different gRNA strands, can target different locations on a genome.

Targeting a DNA sequence with an RNA sequence is simple. RNA and DNA are both chains of nucleotides, and the rules for binding are the same as for reading out or copying DNA: A binds with T, U binds with A, C binds with G, and G binds with C [1]. Targeting a DNA sequence with protein motifs is much more complicated; unlike nucleotide-nucleotide pairing, I can’t fully explain how those residues find their targets, let alone in a single sentence. This difference has consequences for the initial design of the targeting system, as well as for its efficacy and overall success rate.

So the comparative ease of application stems from the difference between protein engineering and sequence design. Protein engineering is hard, but designing a gRNA sequence is easy.

How easy is it really?

Say that New Year’s Eve is coming up, and we want to replace an under-functioning acetaldehyde dehydrogenase [2] with a functional version. First we would need a ~20-nucleotide sequence from the target DNA, like this one from just upstream of the ALDH1B gene:

5′-AAC GAC ATG AGC ACA GCA GG-3′

You can write out the base-pairings by hand or use an online calculator to determine the complementary RNA sequence:

5′-AAC GAC ATG AGC ACA GCA GG-3′
3′-UUG CUG UAC UCG UGU CGU CC-5′

To associate the guide RNA to the Cas9 nuclease, the targeting sequence has to be combined with a scaffold RNA which the protein recognises.

Scaffold RNA:
5′-GUU UUA GAG CUA GAA AUA GCA AGU UAA AAU AAG GCU AGU CCG UUA UCA ACU UGA AAA AGU GGC ACC GAG UGG UGC UUU UUU-3′

Target Complement:
5′-CCU GCU GUG CUC AUG UCG UU-3′

Target complement + scaffold = guide RNA:
5′-CCU GCU GUG CUC AUG UCG UUG UUU UAG AGC UAG AAA UAG CAA GUU AAA AUA AGG CUA GUC CGU UAU CAA CUU GAA AAA GUG GCA CCG AGU GGU GCU UUU UU-3′
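In code, the whole design step is a reverse complement plus a concatenation. A minimal Octave/MATLAB sketch using the sequences above:

```matlab
% Reverse-complement the DNA target into RNA, then append the scaffold.
target = 'AACGACATGAGCACAGCAGG';            % 5'->3' DNA upstream of ALDH1B
pairs  = containers.Map({'A','C','G','T'}, {'U','G','C','A'});
rna = arrayfun(@(b) pairs(b), fliplr(target));
disp(rna)          % CCUGCUGUGCUCAUGUCGUU, the target complement, 5'->3'

scaffold = ['GUUUUAGAGCUAGAAAUAGCAAGUUAAAAUAAGGCUAGUCCGUUAUCA' ...
            'ACUUGAAAAAGUGGCACCGAGUGGUGCUUUUUU'];
guide = [rna scaffold];                     % full guide RNA, 5'->3'
```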

With that sequence we could target the Cas9 nuclease to the acetaldehyde dehydrogenase (ALDH1B) gene, inducing a break and leaving it open to replacement. The scaffold sequence above turns back on itself at the end, sinking into the proper pocket in Cas9, while the target complement sequence coordinates the DNA target, bringing it close to the cutting parts of Cas9. If we introduce a fully functional version of the acetaldehyde dehydrogenase gene at the same time, then we surely deserve a toast as the target organism no longer suffers from an abnormal build-up of toxic acetaldehyde. Practical points remain to actually prepare the gRNA, make the Cas9 protein, and introduce the replacement sequence, but from an informatic design point of view that is, indeed, the gist.

That’s the basics of targeting Cas9 in 1,063 words. I invite you to try and explain the intricacies of TAL effector nuclease protein engineering with fewer words.

Notes:

[1] That’s C for cytosine, G for guanine, U for uracil, and A for adenine. In DNA, uracil is replaced by thymine (T).

[2] Acetaldehyde is an intermediate produced during alcohol metabolism, thought to be largely responsible for hangovers. A mutation in one or both copies of the gene can lead to the so-called “Asian Flush”.

Sources for structures:

I rendered all of the structures using PyMol. The data come from the following publications:

PDB structure: 3VEK (Zn-finger)

Wilkinson-White, L.E., Ripin, N., Jacques, D.A., Guss, J.M., Matthews, J.M. DNA recognition by GATA1 double finger. To be published.

PDB structure: 3ugm (TAL)

Mak, A.N., Bradley, P., Cernadas, R.A., Bogdanove, A.J., Stoddard, B.L. The Crystal Structure of TAL Effector PthXo1 Bound to Its DNA Target. (2012) Science 335: 716-719

PDB structure: 4oo8 (Cas9)
Nishimasu, H., Ran, F.A., Hsu, P.D., Konermann, S., Shehata, S.I., Dohmae, N., Ishitani, R., Zhang, F., Nureki, O. Crystal structure of Cas9 in complex with guide RNA and target DNA. (2014) Cell 156: 935-949

Comic cover original source:
“Amazing Stories Annual 1927” by Frank R. Paul – Scanned cover of pulp magazine. Licensed under Public Domain via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Amazing_Stories_Annual_1927.jpg#/media/File:Amazing_Stories_Annual_1927.jpg