A Skeptic Over Coffee: Young Blood Part Duh

Does this cloudy liquid hold the secret to vitality in your first 100 years and beyond? I can’t say for sure that it doesn’t. What I can say is that I would happily sell it to you for $8,000.

Next time someone tries to charge you a premium to intravenously imbibe someone else's blood plasma, you have my permission to tell them no thanks. Unless there's a chance that it's fake, in which case it might be worth doing.

Californian company Ambrosia LLC has been making the rounds in publications like the New Scientist hype-machine to promote claims that their plasma transfusions show efficacy at treating symptomatic biomarkers of aging. Set up primarily to fleece rich people by exploiting younger, poorer people, on the off chance that the Precious Bodily Fluids of the latter will invigorate the former, the small biotech firm performed a tiny study of over-35s receiving blood plasma transfusions from younger people. It's listed on clinicaltrials.gov and everything.

First of all, to determine the efficacy of a treatment it's important that both the doctors and the patients are blinded to whether they are administering/being administered the active therapeutic. That goes all the way up the line from the responsible physician to the phlebotomist to the statistician analyzing the data. But to blind patients and researchers, the study must include a control group receiving a placebo treatment, which this study did not have. So it's got that going for it.

To be fair, this isn’t actually bad science. For that to be true, it would have to be actual science. Not only does a study like this require a control to account for any placebo effect*, but the changes reported for the various biomarkers may be well within common fluctuations.

Finally, remember that if you assess 20 biomarkers with the common confidence cutoff of p=0.05, chances are one of the twenty will show a statistical difference from baseline. That is what a p-value at that level means: even with no real effect, a difference that large will turn up by random chance about 1 time in 20. Quartz reports the Ambrosia study looked at about 100 different biomarkers and mentions positive changes in 3 of them. I don't know if they performed statistical tests at a cutoff of 0.05, but if so you should expect on average 5 of 100 biomarkers in a screen to show a statistical difference by chance alone. This isn't the first case of questionable statistics selling fountain-of-youth concepts.
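To make the multiple-comparisons point concrete, here is a minimal simulation sketch (my own illustration, not Ambrosia's actual analysis; the sample size and the use of paired t-tests are assumptions): generate 100 biomarkers with no real treatment effect and count how many clear p < 0.05 purely by chance.

# Minimal sketch: how many of 100 null biomarkers "improve" at p < 0.05?
# Purely illustrative; sample size and the paired t-test are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_biomarkers, n_patients = 100, 30

false_positives = 0
for _ in range(n_biomarkers):
    before = rng.normal(size=n_patients)
    after = rng.normal(size=n_patients)   # no true effect whatsoever
    _, p = stats.ttest_rel(before, after)
    false_positives += p < 0.05

print(f"{false_positives} of {n_biomarkers} null biomarkers pass p < 0.05")
# Expect about 5 on average, i.e. a handful of "positive changes" for free.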

All of this is not to say that the experiments disprove the positive effects of shooting up teenage PBFs. It also generated zero conclusive evidence against the presence of a large population of English teapots in erratic orbits around Saturn.

You could conclude by saying “more scientific investigation is warranted” but that would imply the work so far was science.

* The placebo effect can even apply to as seemingly objective a treatment as surgery. Take this 2013 study that found no statistical difference in the outcomes of patients with knee problems treated with either arthroscopic surgery or a surgeon pretending to perform the surgery.


What the cornerstone of any futuristic transportation mix should be.

The future has always promised exciting new forms of transport for the bustling hither and thither of our undoubtedly jumpsuit-wearing, cyborg selves. From the outlandish (flying cars) to the decidedly practical (electric cars), a better way of getting about is always just around the corner. Workers in the United States spend about 26 minutes twice a day on their commutes, and for most people this means driving. What's worse, the negative effect of a long commute on life satisfaction is consistently underestimated. Premature deaths in the United States due to automobile accidents and air pollution from vehicles come to about 33,000 and an estimated 58,000 per year, respectively. Add in all the costs associated with car ownership and road maintenance (not to mention the incalculable cost of automobiles' contribution to the potentially existential threat of climate change) and the picture becomes clear: cars aren't so much a convenient means of conveyance serving the humans they carry as a demanding taskmaster that may be the doom of us all. There must be something better awaiting us in the transportation wonders of tomorrow.

What if we came up with a transportation mode that is faster than taking the bus, costs less than driving, and improves lifespan? What if it also happened to be the most efficient means of transport known? Anything offering up that long list of pros should be a centerpiece of any transportation blend. What wonder of future technology could I possibly be talking about?

I’m writing, of course, about the humble bicycle.

Prioritizing exotic transportation projects like Elon Musk's hyperloop is like inventing a new type of ladder to reach the highest branches while surrounded on all sides by drooping boughs laden with low-hanging fruit. In a great example of working harder, not smarter, city planners in the U.S. strive tirelessly to please our automobile overlords. Everyone needs a car to get to work and the supermarket because everything is far apart, and everything is so far apart because everyone drives everywhere anyway. All the parking spaces and wide lanes push everything even further apart in a commuting-nightmare feedback loop.

It doesn't have to be that way, and it's not too late to change. Consider the famously bikeable infrastructure of the Netherlands, where the bicycle is king. Many people take the purpose-built bike lanes for granted and assume they've always been there, but in fact they are the result of deliberate activism leading to a broad change in transportation policy beginning in the seventies. Likewise, the servile relationship many U.S. cities maintain with cars is not set in stone, and, contrary to popular belief, fuel taxes and registration fees don't cover the costs of building and maintaining the roads.

Even if every conventional automobile were replaced tomorrow with a self-driving electric car, a bicycle would still be the more efficient choice. The reason comes down to simple physics: a typical bike's ~10 kg is a small fraction of the mass of the average rider, so most of the energy delivered to the pedals goes toward moving the human cargo. A car (even a Tesla) has to waste most of its energy moving the car itself. The only vehicle with a chance of besting the bicycle in terms of efficiency is an electric-assist bicycle, once you factor in the total energy costs of producing and shipping the human fuel (food), but even that depends on where you buy your groceries [pdf].
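As a rough back-of-the-envelope sketch of that mass argument (the numbers below are assumptions for illustration, not measurements), compare what fraction of the total moving mass is actually the person being transported:

# Back-of-the-envelope payload fraction: how much of the moving mass is the human?
# Masses are rough assumptions for illustration only.
rider_kg = 75.0
bike_kg = 10.0      # typical bicycle
car_kg = 1600.0     # typical sedan; many electric cars are heavier still

def payload_fraction(vehicle_kg, payload_kg=rider_kg):
    """Fraction of the total moving mass that is the rider."""
    return payload_kg / (payload_kg + vehicle_kg)

print(f"bicycle: {payload_fraction(bike_kg):.0%} of the moving mass is rider")
print(f"car:     {payload_fraction(car_kg):.0%} of the moving mass is rider")
# Roughly 90% for the bike versus a few percent for the car: most of a car's
# kinetic energy and rolling resistance goes into moving the car, not the person.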

Bicycles have been around in more or less modern form for over a hundred years, but the right tool isn’t necessarily the newest. The law of parsimony posits that the simplest solution that suffices is generally the best, and for many of our basic transport needs that means a bicycle. It’s about time we started affording cycling the respect it deserves as a central piece of our future cities and towns. Your future transportation experience may mean you’ll go to the office in virtual reality, meet important clients by hybrid dirigible, and ship supplies to Mars by electric rocket, but you’ll pick up the groceries by bicycle on the way home from the station.

Image sources used for illustrations:

Fat bike CC SA BY Sylenius

Public Domain:

Tire tracks

Lunar lander module

Apollo footprint

Trolling a Neural Network to Learn About Color Cues

Neural networks are breaking into new fields and refining roles in old ones on a day-to-day basis. The main enabling breakthrough in recent years is the ability to efficiently train networks consisting of many stacked layers of artificial neurons. These deep learning networks have been used for everything from tomographic phase microscopy to learning to generate speech from scratch.

A particularly fun example of a deep neural net comes in the form of @ColorizeBot, a Twitter bot that generates color images from black and white photographs. For landscapes, portraits, and street photography the results are reasonably realistic, even if they do fall into an uncanny valley that is eerie, striking, and often quite beautiful. I decided to try to trick @ColorizeBot to learn something about how it was trained and regularized, and maybe gain some insights into general color cues. First, a little background on how @ColorizeBot might be put together.

According to the description on @ColorizeBot’s Twitter page:

I hallucinate colors into any monochrome image. I consist of several ConvNets and have been trained on millions of images to recognize things and color them.

This tells us that CB is indeed an artificial neural network with many layers, some of which are convolutional. Convolutional layers share weights and give deep learning the ability to discover features from images, rather than relying on the conventional machine vision approach of manually extracting image features to train an algorithm. This gives CB the ability to discover important indicators of color that its handler wouldn't necessarily have thought of in the first place. I expect CB was trained as a special type of autoencoder. Normally, an autoencoding neural network has the same data on both the input and output side and iteratively tries to reproduce the input at the output in an efficient manner. In this case, instead of producing a single grayscale image at the output, the network would need to produce three versions, one image each for the red, green, and blue color channels. Of course, it doesn't make sense to totally throw away the structure of the black and white image, and the way the authors include this a priori knowledge to inform the output must have been important for getting the technique to work well and fast. CB's Twitter bio claims it was trained on millions of photos, and I tried to trick it into making mistakes and revealing something about its inner workings and training data. To do this, I took some photos I thought might yield interesting results, converted them to grayscale, and sent them to @ColorizeBot.
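As a concrete (and heavily simplified) sketch of what such a grayscale-to-color network might look like, here is a toy convolutional encoder-decoder in PyTorch. This is my own guess at the general shape of the thing, not @ColorizeBot's actual code: it predicts only the two missing chroma channels and keeps the input luminance untouched, which is one way to bake in the a priori knowledge mentioned above.

# Toy colorizer sketch (my assumption, not @ColorizeBot's actual architecture):
# predict two chroma channels from luminance and reuse the input as the L channel.
import torch
import torch.nn as nn

class ToyColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 2, kernel_size=3, padding=1), nn.Tanh(),  # 2 chroma channels
        )

    def forward(self, gray):
        chroma = self.decoder(self.encoder(gray))
        # Keep the input luminance; only the color information is "hallucinated".
        return torch.cat([gray, chroma], dim=1)  # (luminance, chroma, chroma)

gray_batch = torch.randn(4, 1, 64, 64)   # stand-in for grayscale photos
print(ToyColorizer()(gray_batch).shape)  # torch.Size([4, 3, 64, 64])

Training something like this would amount to converting millions of color photos to a luminance/chroma representation, feeding in the luminance, and penalizing the difference between predicted and true chroma, which is the "special type of autoencoder" flavor described above.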

The first thing I wanted to try is a classic teaching example from black and white photography. If you have ever thought about dusting off a vintage medium format rangefinder and turning your closet into a darkroom, you probably know that a vibrant sun-kissed tomato on a bed of crisp greens looks decidedly bland on black and white film. If one wishes to pursue the glamorous life of a hipster salad photographer, it's important to invest in a few color filters to distinguish red and green. In general, red tomatoes and green salad leaves have approximately the same luminance (i.e. brightness) values. I wrote about how this example might look through the unique eyes of cephalopods, which can perceive color with only one type of color photoreceptor. Our own visual system mainly distinguishes the two types of object by their color, but if a human viewer looks at a salad in a dark room (what? midnight is a perfectly reasonable time for salad), they can still tell what is and is not a tomato without distinguishing the colors. @ColorizeBot interprets a B&W photo of cherry tomatoes on spinach leaves as follows:

c2sel44vqaagemw-jpg-large

This scene is vaguely plausible. After all, some people may prefer salads with unripe tomatoes. Perhaps meal-time photos from these people's social media feeds made it into the training data for @ColorizeBot. What is potentially more interesting is that this test image revealed a spatial dependence: the tomatoes in the corner were correctly filled in with a reddish hue, while those in the center remain green. Maybe this has something to do with how the salad images used to train the bot were framed. Alternatively, it could be that the abundance of leaves surrounding the central tomatoes provides a confusing context and CB is used to recognizing more isolated round objects as tomatoes. In any case, it does know enough to guess that spinach is green and some cherry tomatoes are reddish.

Next I decided to try and deliberately evoke evidence of overfitting with an Ishihara test. These are the mosaic images of dots with colored numbers written in the pattern. If @ColorizeBot scraped public images from the internet for some of its training images, it probably came across Ishihara tests. If the colorizer expects to see some sort of numbers (or any patterned color variation) in a circle of dots that looks like a color-blindness test, it’s probably overfitting; the black and white image by design doesn’t give any clues about color variation.

c2se-teveae2_ay-jpg-large

That one's a pass. The bot filled in the flyer with a bland brown coloration, but didn't overfit by dreaming up color variation in the Ishihara test. This tells us that even though there's a fair chance the neural net has seen an image like this before, it doesn't expect one every time it sees a flat pattern of circles. CB has also learned to hedge its bets when looking at a box of colored pencils, which could conceivably be a box of brown sketching pencils.

c2seviwviaa87xo-jpg-large

What about a more typical type of photograph? Here’s an old truck in some snow:

c2scawfveaallw4-jpg-large

CB managed to correctly interpret the high-albedo snow as white (except where it was confused by shadows), and, although it made the day out to be a bit sunnier than it actually was, most of the winter grass was correctly interpreted as brown. But have a look at the right-hand side of the photo, where apparently CB decided the seasons changed to a green spring in the time it takes to scan your eyes across the image. This is the sort of surreal, uncanny effect that CB is capable of. It's more pronounced, and sometimes much more aesthetic, in some of the fancier photos on CB's Twitter feed. The seasonal transformation from one side of the photo to the other tells us something about the limits of CB's interpretation of context.

In a convolutional neural network, each part of an input image is convolved with kernels of a limited size, and the influence of one part of the image on its neighbors is limited to some degree by the size of the largest kernels. You can think of these convolutional kernels as smaller sub-images that are applied to the full image as a moving filter, and they are a foundational component of the ability of deep neural networks to discover features, like edges and orientations, without being explicitly told what to look for. The results of these convolutional layers propagate deeper through the network, where the algorithm can make increasingly complex connections between aspects of the image.
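As a minimal illustration of that moving-filter idea (a hand-written kernel, not one of CB's learned ones), convolving an image with a small edge kernel shows both how features get picked out and how a single layer only ever sees a neighborhood the size of its kernel:

# Hand-written 3x3 edge kernel applied as a moving filter, the same operation a
# convolutional layer performs with learned weights. Illustrative only.
import numpy as np
from scipy.ndimage import convolve

image = np.zeros((8, 8))
image[:, 4:] = 1.0                    # a simple vertical edge

edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])  # Sobel-style horizontal gradient

response = convolve(image, edge_kernel, mode="nearest")
print(np.round(response, 1))
# The response is nonzero only within one pixel of the edge: with a 3x3 kernel,
# each output value "sees" only its immediate neighborhood, which is why context
# in any single convolutional layer is inherently local.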

In the snowy truck and the tomato/spinach salad examples, we were able to observe @ColorizeBot's ability to change its interpretation of the same sort of objects across a single field of view. If you, fellow human, or I see an image that looks like it was taken in winter, we include in our expectations "This photo looks like it was taken in winter, so it is likely the whole scene takes place in winter, because that's how photographs and time tend to work." Likewise, we might find it strange for someone to have a preference for unripe tomatoes, but we'd find it even stranger for someone to prefer a mixture of ripe-ish and unripe tomatoes on the same salad. Maybe the salad maker was an impatient type suffering from a tomato shortage, but given a black and white photo that wouldn't be my first guess about how it came to be, based on the way most of the salads I've seen throughout my life have been constructed. In general we don't see deep neural networks like @ColorizeBot generalizing that far quite yet, and the resulting sense of context can be limited. This is different from generative networks like Google's "Inception" or style transfer systems like Deepart.io, which perfuse an entire scene with a cohesive theme (even if that theme is "everything is made of duck's eyes").

Finally, what does CB think of theScinder’s logo image? It’s a miniature magnetoplasmadynamic thruster built out of a camera flash and magnet wire. Does CB have any prior experience with esoteric desktop plasma generators?

c29xshxviaa2_g3

That’ll do CB, that’ll do.

Can’t get enough machine learning? Check out my other essays on the topic

@ColorizeBot’s Twitter feed

@CtheScinder’s Twitter feed

All the photographs used in this essay were taken by yours truly (at http://www.thescinder.com), and all images were colorized by @ColorizeBot.

And finally, here’s the color-to-B&W-to-color transformation for the tomato spinach photo:

tomatotrickery

Journalistic Phylogeny of the Silicon Valley Apocalypse

For some reason, doomsday mania is totally in this season.

In 2014 I talked about the tendency of internet writers to regurgitate the press release for trendy science news. The direct lineage from press release to press coverage makes it easy for writers to phone it in: university press offices essentially hand out pre-written sensationalist versions of recent publications. It's not surprising that, with so much of the resulting material in circulation taking its text verbatim from the same origin, the similarities can be visualized like genetic sequences in a phylogenetic tree.

Recently the same sort of journalistic laziness reared its head in stories about the luxury doomsday-prepper market. Evan Osnos at The New Yorker wrote an article describing the trend among Silicon Valley elites of buying up bunkers, bullets, and body armor, apparently in anticipation that the rest of us will rise up against them following the advent of A.I. Without a press release to serve as a ready-made template, other outlets turned to reporting on the New Yorker story itself as if it were a primary source. This is a bit different from copying down the press release as your own, and the inheritance is not as direct. If anything, this practice is even more hackneyed. At least a press office puts out its releases with the intention that the text serve as material for coverage, so that the topic gets as much circulation as possible. Covering another story as a primary source, rather than writing an original commentary or rebuttal, is just a way to skim traffic off a trend.

In any case, I decided to subject this batch of articles to my previous workflow: converting the text to a DNA sequence with DNA writer by Lensyl Urbano, aligning the sequences with MAFFT and/or T-Coffee Expresso, and using the distances from the alignment to make a tree in Phyl.io. Here’s the result:

svatree

Heredity isn’t as clear-cut as it was when I looked at science articles: there’s more remixing in this case and we see that in increased branch distances from the New Yorker article to most of the others. Interestingly, there are a few articles that are quite close to each other, much more so than they are to the New Yorker article. Perhaps this rabbit hole of quasi-plagiarism is even deeper than it first appears, with one article covering another article about an article about an article. . .

In any case, now that I’ve gone through this workflow twice, the next time I’ll be obligated to automate the whole thing in Python.
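In that spirit, here is a rough sketch of what the Python version of the first step might look like. The two-bits-per-base encoding below is my own stand-in for Lensyl Urbano's DNA writer, and the file names are hypothetical; the output is just a FASTA file that MAFFT will happily align.

# Sketch of automating the text-to-"DNA" step: encode each article as a fake
# nucleotide sequence and write a FASTA file that MAFFT can align.
# The 2-bits-per-base encoding is my own stand-in, not Urbano's DNA writer.
BASES = "ACGT"

def text_to_dna(text):
    """Map every byte of the text to four bases (2 bits per base)."""
    seq = []
    for byte in text.encode("utf-8"):
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

articles = {
    "newYorker": open("newyorker.txt").read(),      # hypothetical local copies
    "bizJournals": open("bizjournals.txt").read(),
}

with open("apocalypse.fasta", "w") as fasta:
    for name, text in articles.items():
        fasta.write(f">{name}\n{text_to_dna(text)}\n")

# Then, for example:  mafft apocalypse.fasta > aligned.fasta
# and feed the resulting distances to a tree viewer as before.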

You can tinker with the MAFFT alignment, at least for a while, here:
http://mafft.cbrc.jp/alignment/server/spool/_out1701310631s24824093CAxLP69W2ZebokqEy0TuG.html

My tree:
((((((((((((1_bizJournals:0.65712,(3_newYorker:0.44428,13_breitbart:0.44428):0.21284):0.11522,10_vanityFair:0.77234):0.04207,6_offTheGridNews:0.8441):0.05849,17_EdgyLabs:0.87290):0.04449,14_cnbc_:0.91739):0.02664,2_guardian:0.94403):0.02047,16_RecodeDotNet:0.96451):0.02541,(7_qzDotCom:0.95494,15_npr:0.95494):0.03498):0.00361,8_theIETdotCom:0.99353):0.01310,18_PedestrianDotTV:1.00664:0.03785,((9_ukBusinessInsider:0.06443,12_yahoo:0.06443):0.96008,19_sundayMorningHerald:1.02451):0.01997):0.00953,11_wiredGoogleCatsOUTGROUP3:1.05401)

Sources:

https://www.theguardian.com/technology/2017/jan/29/silicon-valley-new-zealand-apocalypse-escape
http://uk.businessinsider.com/silicon-valley-billionaires-apocalypse-preppers-2017-1?r=US&IR=T
http://www.vanityfair.com/news/2017/01/silicon-valley-is-preparing-for-the-apocalypse
http://www.bizjournals.com/sanjose/news/2017/01/24/apocalypse-now-silicon-valley-elite-says-theyre.html
http://www.newyorker.com/magazine/2017/01/30/doomsday-prep-for-the-super-rich

https://finance.yahoo.com/news/silicon-valley-billionaires-preparing-apocalypse-202000443.html

https://eandt.theiet.org/content/articles/2017/01/apocalypse-2017-silicon-valley-and-beyond-worried-about-the-end-of-the-world/
http://www.offthegridnews.com/extreme-survival/50-percent-of-silicon-valley-billionaires-are-prepping-for-the-apocalypse/
https://qz.com/892543/apocalypse-insurance-reddits-ceo-venture-capitalists-and-others-in-silicon-valley-are-preparing-for-the-end-of-civilization/

https://www.wired.com/2012/06/google-x-neural-network/
http://www.breitbart.com/tech/2017/01/24/silicon-valley-elites-privately-turning-into-doomsday-preppers/
http://www.cnbc.com/2017/01/25/the-super-rich-are-preparing-for-the-end-of-the-world.html
http://www.npr.org/2017/01/25/511507434/why-some-silicon-valley-tech-executives-are-bunkering-down-for-doomsday
http://www.recode.net/2017/1/23/14354840/silicon-valley-billionaires-prepping-survive-underground-bunkers-new-yorker
https://edgylabs.com/2017/01/30/doomsday-prepping-silicon-valley/
https://www.pedestrian.tv/news/tech/silicon-valley-ceos-are-terrified-of-the-apocalyps/ba4c1c5d-f1c4-4fd7-8d32-77300637666e.htm
http://www.smh.com.au/business/world-business/rich-silicon-valley-doomsday-preppers-buying-up-new-zealand-land-20170124-gty353.html

Teaching a Machine to Love XOR

xorsketch

The XOR function outputs true if exactly one of the two inputs is true

The exclusive or function, also known as XOR (but never going by both names simultaneously), has a special relationship to artificial intelligence in general, and neural networks in particular. This is thanks to a prominent book from 1969 by Marvin Minsky and Seymour Papert entitled "Perceptrons: An Introduction to Computational Geometry." Depending on who you ask, this text was single-handedly responsible for the AI winter, thanks to its critiques of the state-of-the-art neural networks of the time. In an alternative view, few people ever actually read the book but everyone heard about it, and the tendency was to generalize a special-case limitation of local and single-layer perceptrons to the point where interest and funding for neural networks evaporated. In any case, thanks to back-propagation, neural networks are now in widespread use and we can easily train a three-layer neural network to replicate the XOR function.

In words, the XOR function is true for two inputs if one of them, but not both, is true. When you plot the XOR as a graph, it becomes obvious why the early perceptron would have trouble getting it right more than half the time.

sketch2dxor

There's no way to draw a straight 2D line on the graph that separates the true and false outputs for XOR, red and green in the sketch above. Go ahead and try. The same goes for trying to use a plane to separate a 3D version, and so on up to higher dimensions.

sketch3dxor

That’s a problem because a single layer perceptron can only classify points linearly. But if we allow ourselves a curved boundary, we can separate the true and false outputs easily, which is exactly what we get by adding a hidden layer to a neural network.

xorwhiddenlayer

The truth-table for XOR is as follows:

Input A   Input B   Output
0         0         0
0         1         1
1         0         1
1         1         0

If we want to train a neural network to replicate the table above, we use backpropagation to flow the output error backward through the network based on the activations at each node. Using the gradient of these activations and the error in the layer immediately above, the network's weights can be optimized by something like gradient descent. As a result, our network can be taught to represent a non-linear function. For a network with two inputs, three hidden units, and one output, the training might go something like this:

trainingxor

Update (2017/03/02) Here’s the gist for making the gif above:
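The gist itself isn't reproduced here, but as a rough stand-in (my own code, with the plotting and animation left out), a 2-3-1 network trained with plain numpy backpropagation can learn the table above:

# Minimal 2-3-1 network learning XOR with plain numpy backprop.
# A stand-in sketch, not the original gist; plotting/animation omitted.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 3)); b1 = np.zeros((1, 3))
W2 = rng.normal(size=(3, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden layer activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backpropagate the squared error through the sigmoid derivatives.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# Should settle near [[0], [1], [1], [0]], i.e. the XOR truth table.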

A Skeptic Over Coffee – Young Blood

dsc_0005

A tragic tale of a star-crossed pair,
science vs. a journalist’s flare

When reporting on scientific topics, particularly when describing individual papers, how important is it for the popular coverage to have anything to do with the source material? Let's take a look at a recent science paper from Justin Rebo and others in Nature Communications and the accompanying coverage by Claire Maldarelli at Popular Science.

Interest in parabiosis has increased recently due to coverage of scientific papers describing promising results in mice and the high profile of some parabiosis enthusiasts. Parabiosis, from the Greek for "living beside", typically has involved stitching two mice together. After a few days the fused tissue provides blood exchange through a network of newly formed capillaries.

The most recent investigation into the healing effects of youthful blood exchange, from Rebo et al., expands the equipment list used for blood exchange beyond the old technique of surgically joining two animals (essentially duct-taping them together). Instead of relying on the animals to grow new capillary beds for blood exchange to occur, the authors of the new paper used a small pump to exchange a few drops of blood at a time until each mouse carried approximately equal proportions of its own blood and its partner's.
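To get a feel for why the pump has to move a fair amount of blood, here is a toy mixing-model sketch (my own illustration with an assumed blood volume and drop size, not the paper's protocol): swapping equal small volumes back and forth only approaches a 50/50 mix after roughly a full blood volume has been cycled through.

# Toy two-reservoir mixing model of incremental blood exchange.
# Blood volume and aliquot size are assumptions for illustration only.
blood_volume_ml = 2.0     # rough total blood volume of a mouse
aliquot_ml = 0.05         # "a few drops" swapped per cycle

own_fraction = 1.0        # fraction of mouse A's blood that is still its own
cycles = 0
while own_fraction > 0.55:            # stop near an even 50/50 mix
    swapped = aliquot_ml / blood_volume_ml
    # A loses a drop of its current mix and gains a drop of B's mirror-image mix.
    own_fraction += swapped * ((1.0 - own_fraction) - own_fraction)
    cycles += 1

print(f"{cycles} exchange cycles, {cycles * aliquot_ml:.2f} ml pumped")
# Roughly a full blood volume passes through the pump before the two mice carry
# close to equal parts of their own and their partner's blood.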

According to the coverage from Popular Science:

While infusing blood from a younger mouse into an older mouse had no effect on the elderly mouse in the latest study, infusing blood from an older mouse into a younger one caused a host of problems in organs and other tissues.

Just a few paragraphs further Maldarelli quotes Conboy (last author on the paper) as saying “‘This study tells us that young blood, by itself, cannot work as medicine’.” In contrast, in the paper the authors state that “Importantly, our work on rodent blood exchange establishes that blood age has virtually immediate effects on regeneration of all three germ layer derivatives.” and later that “. . . extracorporeal blood manipulation provides a modality of rapid translation to human clinical intervention.”[1] There seems to be a bit of disagreement between the version of Conboy on the author list of the scientific article and the version of Conboy quoted in the PopSci coverage of the same article.

We also learned from Maldarelli that the tests reported in the paper were performed a month after completing the blood exchange procedure; according to the paper itself, however, the longest interval from blood exchange to the experiment's end (sacrifice for post-mortem tissue analysis) was 6 days.

I came across the PopSci coverage when it appeared on a meta-news site that highlights popular web articles, so it's safe to assume I wasn't the first to read it. Shouldn't the coverage of scientific articles in the lay press have more in common with the source material than just buzzwords? The science wasn't strictly cut and dried: not every marker or metric responded in the same way to the old/young blood exchange, and while I agree that we shouldn't encourage anyone to build a blood-exchange rejuvenation pod in their garage, the findings of the article fell a long way from the conclusion reported in the lay article: that young blood had no effect on the physiology of old mice. This is to say nothing of the quality of the paper itself and the confidence we should assign to the experimental results in the first place: with 12 mice total* and a p-value cutoff of 0.05 (1 out of every 20 experiments will appear significant at random), I'd take the original results with a grain of salt as well.

This is the face of science we show the public, and it's unreliable. It is no easy task for journalists to accurately report and interpret scientific research. Deadlines are tight, and writers face competition and pressure from cheap amateur blogs and regurgitation feeds. "What can I do to help?" you ask. As a consumer of information you can demand scientific literacy in the science news you consume. Ask writers to convey confidence and probability in a consistent way that can be understood and compared to other results by non-specialists. As a bare minimum, science and the press that covers it should at least have more in common than the latest brand of esoteric jargon.

If we only pay attention to the most outlandish scientific results, then most scientific results will be outlandish.

*The methods describe a purchase of 6 old and 6 young mice. However, elsewhere in the paper the groups are said to contain 8 mice each. Thus it is not clear how many mice in total were used in these experiments, and how they managed to create 12 blood exchange pairings for both control and experimental groups without re-using the same mice.

[1] Rebo, J. et al. A single heterochronic blood exchange reveals rapid inhibition of multiple tissues by old blood. Nat. Commun. 7, 13363 doi: 10.1038/ncomms13363 (2016).

A skeptic over coffee: who owns you(r) data?

AskDNA

“Everyone Belongs to Everyone Else”

- mnemonic marketing from Aldous Huxley's Brave New World

A collaboration between mail-order genomics company 23andMe and pharmaceutical giant Pfizer reported 15 novel genes linked to depression in a genome-wide association study published in Nature. The substantial 23andMe user base and relative prevalence of the mental illness provided the numbers necessary to find correlations between a collection of single nucleotide polymorphisms (SNPs) and the condition.

This is a gentle reminder that even when the service isn’t free, you very well may be the product. It’s not just Google and Facebook whose business plans hinge on user data. From 23andMe’s massive database of user genetic information to Tesla’s fleet learning Autopilot (and many more subtle examples that don’t make headlines), you’re bound to be the input to a machine learning algorithm somewhere.

On the one hand, it’s nice to feel secure in a little privacy now and again. On the other, blissful technological utopia? If only the tradeoffs were so clear. Note that some (including bearded mo. bio. maestro George Church) say that privacy is a thing of the past, and that openness is the key (the 23andMe study participants consented that their data be used for research). We’ve known for a while that it’s possible to infer the sources of anonymous genome data from publicly available metadata.

The data of every person are fueling the biggest changes of our time in transportation, technology, healthcare and commerce, and there's a buck (or a trillion) to be made there. It remains to be seen whether the benefits will mainly be consolidated by those who already control large pieces of the pie or will fall largely to the multitudes making up the crust (with plenty of opportunities for crumb-snatchers). On the bright side, if your data make up a large enough portion of the machine learning inputs for the programs that eventually coalesce into an omnipotent AI, maybe there'll be a bit of you in the next generation superorganism.

Through the strange eyes of a cuttlefish

A classic teaching example in black and white film photography courses is the tomato on a bed of leaves. Without the use of a color filter, the resulting image is low-contrast and visually uninteresting. The tomato is likely to look unnaturally dark and lifeless next to similarly dark leaves; although in a color photograph the colors make for a stark contrast, the intensity values of the red tomato fruit and the green leaves are in fact nearly the same. The use of a red or green filter can attenuate the intensity of one of the colors, making it possible for an eager photographer to undertake the glamorous pursuit of fine-art salad photography.

Caprese_cherry_tomatoesBWColourComparison

The always clever cephalopods (smart enough to earn honorary vertebrate status in UK scientific research) somehow manage to pull off a similar trick without the use of a photographer's color filters. Marine biologists have been flummoxed for years by the ability of squid, cuttlefish, and octopuses* to effect exact color camouflage in complex environments, and by their impressive use of color patterning in hunting and inter-species communication. The paradox is that their eyes (the cephalopods', not the marine biologists') contain only a single type of photoreceptor, rather than the two or more color photoreceptors of humans and other color-sensitive animals.

Berkeley/Harvard duo Stubbs & Son have put forth a plausible explanation for the age-old paradox of color camouflage in color-blind cephalopods. They posit that cephalopods use chromatic aberration and a unique pupil shape to distinguish colors. With a wide, w-shaped pupil, cephalopods potentially retain much of the color blurring of different wavelengths of light. Chromatic aberration is nothing more than color-dependent defocus, and by focusing through the different colors it is theoretically possible for the many-limbed head-foots to use their aberrated eyes as an effective spectrophotometer, using a different eye length to sharply focus each color. A cuttlefish may distinguish tomato and lettuce in a very different way than a black and white film camera or human eyes.

tomatoRGBcuttleVision

A cuttlefish’s take on salad

A cuttlefish might focus each wavelength sequentially to discern color. In the example above, each image represents preferential focus for red, green, and blue from top to bottom. By comparing each image to every other image, the cephalopod could learn to distinguish the colorful expressions of their friends, foes, and environment. Much like our own visual system automatically filters and categorizes objects in a field of view before we know it, much of this perception likely occurs at the level of “pre-processing,” before the animal is acutely aware of how they are seeing.

cuttleVisionKalamar

How a cuttlefish might see itself

seaCottonComp

A view of the reef.

A typical night out through the eyes of a cuttlefish might look something like this:

There are distinct advantages to this type of vision in specialized contexts. With only one type of photoreceptor, light sensitivity is higher than in the same eye divided among multiple photoreceptor types (ever notice how human color acuity falls off at night?). Mixed colors would look distinctly different, and, potentially, individual pure wavelengths could be more accurately distinguished. In human vision we can't tell the difference between an individual wavelength and a mix of colors that happens to excite our color photoreceptors in the same proportions as the pure color, but a cuttlefish might be able to resolve these differences.

On the other hand, the odd w-shaped pupil of cephalopods retains more imaging aberrations than a circular pupil (check out the dependence of aberrations on the pupil radius in the corresponding Zernike polynomials to understand why). As a result, cephalopods would have slightly worse vision than humans with the same eye size under some conditions. Mainly those conditions consist of living on land. Human eyes are not well suited to the higher refractive index of water as compared to air. We would also probably need to incorporate some sort of lens hood (e.g. something like a brimmed hat) to deal with the strong gradient of light formed by light absorption in the water, another function of the w-shaped cephalopod pupil.

Studying the sensory lives of other organisms provides insight into how they might think, illuminating our own vision and nature of thought by contrast. We may still be a long ways off from understanding how it feels to instantly change the color and texture of one’s skin, but humans have just opened a small aperture into the minds of cuttlefish to increase our understanding of the nature of thought and experience.

How I did it
Every image is formed by smearing light from a scene according to the Point Spread Function (PSF) of the imaging system. This is a consequence of the wave nature of light and the origin of the diffraction limit. In Fourier optics, the point spread function is the absolute value squared of the Fourier transform of the pupil function. To generate the PSF, I thresholded and dilated this image of a common cuttlefish eye (public domain from Wikipedia user FireFly5), before taking the Fourier transform and squaring the result. To generate the images and video mentioned above, I added differential defocus (using the Zernike polynomial for defocus) to each color channel and cycled through the resulting three monochromatic images. I used ImageJ and octave for image processing.
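For the curious, a stripped-down sketch of that pipeline in Python (numpy standing in for octave and ImageJ, with a hypothetical binary mask file in place of the traced cuttlefish pupil) looks something like this:

# Sketch of the PSF-from-pupil pipeline described above, in numpy rather than
# octave/ImageJ. "pupil_mask.png" is a hypothetical binary image of the aperture.
import numpy as np
from imageio.v2 import imread

img = imread("pupil_mask.png")
if img.ndim == 3:
    img = img.mean(axis=-1)             # collapse to grayscale
pupil = (img > 128).astype(float)       # thresholded, dilated mask assumed ready

ny, nx = pupil.shape
y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
r2 = x**2 + y**2                        # unit-radius pupil coordinates

def psf(defocus_waves):
    """PSF = |FT(pupil * exp(i * defocus phase))|^2, defocus given in waves."""
    phase = 2 * np.pi * defocus_waves * (2 * r2 - 1)   # Zernike defocus term
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

# A different defocus for each color channel mimics chromatic aberration; each
# channel of a color photo is then blurred with its own PSF and the three
# monochromatic results are cycled, as in the images and video above.
psf_red, psf_green, psf_blue = psf(0.0), psf(0.5), psf(1.0)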

Sources for original images in order of appearance:

https://en.wikipedia.org/wiki/File:Cuttlefish_eye.jpg

https://commons.wikimedia.org/wiki/File:Caprese_cherry_tomatoes.JPG

https://en.wikipedia.org/wiki/File:Kalamar.jpg


https://en.wikipedia.org/wiki/Coral_reef#/media/File:Sea_Cotton.jpg

And Movie S2

*The plural of octopus has all the makings of another senseless ghif/gif/zhaif controversy. I have even heard one enthusiast insist on “octopodes”

Bonus Content:

RGBTest

Primary color disks.

In particular, defocus pseudocolor vision would make for interesting perceptions of mixed wavelengths. Observe the color disks above (especially the edges) in trichromatic and defocus pseudo-color.

camoCuttle03

cuttleW

The aperture used to calculate chromatic defocus.

Bonus content original image sources:

Swimming cuttlefish in camouflage CC SA BY Wikipedia user Konyali43 available at: https://commons.wikimedia.org/wiki/File:Camouflage_cuttlefish_03.jpg

The aperture I used for computing chromatic defocus is a mask made from the same image as the top image for this post: https://en.wikipedia.org/wiki/File:Cuttlefish_eye.jpg

2017/05/03 – Fixed broken link to Stubbs & Stubbs PNAS paper: http://www.pnas.org/content/113/29/8206.full.pdf

Perspective across scales (Spores molds and fungus* – recap)

*Actually just lichens and a moldy avocado

Take your right hand and cover your left eye. Keeping both eyes wide open, look at an object halfway across the room. You can now "see through your hand."** Your brain compiles the world around you into a single image that we intuitively equate with media such as photography and video, but in fact (as evidenced by your brain ignoring your hand occluding half your visual inputs) this mental image of the world is compiled from two different perspectives. The processing side of the human visual system is therefore very well set up to interpret stereographic images. Some people complain about this, but you can always file a bug report with reality if it becomes too much trouble.

Human binocular vision works pretty well at scales where the inter-ocular distance provides a noticeable difference in perspective, but not for objects that are very close or very far away. This is why distant mountains look flat [citation needed], and why we don't have good spatial intuition for very small objects, either. Stereophotography can improve our intuition for objects outside the scales of our usual experience. By modifying the distance between two viewpoints, we can enhance our experience of perspective.

For these stereo photos of lichens, I used a macro bellows with a perspective control lens. This type of lens is used for fixing vanishing lines in architectural photography or for making things look tiny that aren't, but in this case it makes a useful tool for shifting perspective by a few centimetres.


stereoMacroLens1

It would probably be easier to move the sample instead.

stereoMacroSample

The images below require a pair of red-blue filters or 3D glasses to shepherd a different perspective image into each eye, for spatial interpretation in your meat-based visual processor.

niceLichenAnaglyph

lichenAgainAnaglyph

anotherLichenAnaglyph

avocadoMold

curledLichenTM2016June
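If you want to assemble your own red-blue anaglyphs like the ones above, the recipe is mostly channel shuffling. A minimal sketch (hypothetical file names, and assuming the stereo pair is already aligned):

# Minimal red-cyan anaglyph from an aligned stereo pair (hypothetical filenames).
import numpy as np
from imageio.v2 import imread, imwrite

left = imread("lichen_left.jpg").astype(float)    # left view feeds the red channel
right = imread("lichen_right.jpg").astype(float)  # right view feeds green and blue

def to_gray(img):
    return img.mean(axis=-1) if img.ndim == 3 else img

anaglyph = np.dstack([to_gray(left), to_gray(right), to_gray(right)])
imwrite("lichen_anaglyph.jpg", anaglyph.clip(0, 255).astype(np.uint8))
# Viewed through red-blue glasses, each eye then receives only its own perspective.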

Another way to generate the illusion of dimensionality is parallax. This is a good way to judge depth when your eyes are on opposite sides of your head.

DSC_0042

DSC_0072

DSC_0051

curledLichenTM2016JuneGIF

**If you currently have use of only a single eye, the same effect can be achieved by holding the eye of a needle or another object thinner than your pupil directly in front of the active eye. This is something that Leonardo (the blue one) remarked on, and it suggests the similarity between imaging with a relatively large aperture (like your dilated pupil) and an "image" reconciled from multiple images at different perspectives, e.g. binocular vision.

Super Gravity Brothers

GW150914MorletSpec

The GW150914 black hole merger event recorded by aLIGO, represented in a wavelet (Morlet basis) spectrogram. This spectrogram was based on the audio file released with the original announcement.

The data from the second detection, GW151226, is another beast entirely in that the signal is very much buried in the noise.

Raw data:

gw151226

Wavelet spectrogram:

gw151226CWTspec

The LIGO Open Science Center makes these data available, along with signal processing tutorials.

Now to see how the professionals do it:

I used MATLAB's wavelet toolbox for the visualisations, aided by this example.
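For anyone without a MATLAB license, a rough Python equivalent using scipy's continuous wavelet transform looks like the sketch below; the strain file name and sampling rate are assumptions, following the plain-text downloads from the LIGO Open Science Center.

# Rough Python stand-in for the MATLAB wavelet spectrogram: a Morlet-based CWT
# of LIGO strain data. File name and sampling rate are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import cwt, morlet2

fs = 4096                                    # Hz, a typical LOSC sampling rate
strain = np.loadtxt("GW150914_strain.txt")   # hypothetical plain-text strain dump

w = 6.0                                      # Morlet wavelet parameter
freqs = np.linspace(20, 500, 200)            # Hz, the band where the chirp lives
widths = w * fs / (2 * np.pi * freqs)        # convert frequencies to CWT widths

coeffs = cwt(strain, morlet2, widths, w=w)
plt.pcolormesh(np.arange(strain.size) / fs, freqs, np.abs(coeffs), shading="auto")
plt.xlabel("time (s)"); plt.ylabel("frequency (Hz)")
plt.title("Morlet wavelet spectrogram (sketch)")
plt.show()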